Test Report: Docker_Linux_crio_arm64 21847

fa4d670f7aa2bf54fac775fb3c292483f6687320:2025-11-21:42430

Failed tests (36/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.33
35 TestAddons/parallel/Registry 15.12
36 TestAddons/parallel/RegistryCreds 0.51
37 TestAddons/parallel/Ingress 145.62
38 TestAddons/parallel/InspektorGadget 5.33
39 TestAddons/parallel/MetricsServer 5.42
41 TestAddons/parallel/CSI 40.27
42 TestAddons/parallel/Headlamp 3.32
43 TestAddons/parallel/CloudSpanner 5.3
44 TestAddons/parallel/LocalPath 9.43
45 TestAddons/parallel/NvidiaDevicePlugin 6.41
46 TestAddons/parallel/Yakd 5.28
97 TestFunctional/parallel/ServiceCmdConnect 603.53
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.91
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
135 TestFunctional/parallel/ServiceCmd/Format 0.47
136 TestFunctional/parallel/ServiceCmd/URL 0.51
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.7
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.53
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
191 TestJSONOutput/pause/Command 2.19
197 TestJSONOutput/unpause/Command 2.11
282 TestPause/serial/Pause 7.03
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.46
304 TestStartStop/group/old-k8s-version/serial/Pause 8.73
310 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.55
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.82
322 TestStartStop/group/no-preload/serial/Pause 6.56
328 TestStartStop/group/embed-certs/serial/Pause 7.76
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.45
339 TestStartStop/group/newest-cni/serial/Pause 6.74
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.19
351 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.1
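
Note: each failure below can usually be reproduced in isolation. A minimal sketch, assuming a minikube source checkout with the test binary built at out/minikube-linux-arm64 (the -minikube-start-args flag name is taken from minikube's integration suite; adjust if it differs in your checkout):

	# Re-run a single failed test against the same driver/runtime combination
	go test ./test/integration -v -run "TestAddons/serial/Volcano" \
	  -args -minikube-start-args="--driver=docker --container-runtime=crio"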

TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable volcano --alsologtostderr -v=1: exit status 11 (333.987853ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 13:59:04.876741  297653 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:59:04.877575  297653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:04.877610  297653 out.go:374] Setting ErrFile to fd 2...
	I1121 13:59:04.877628  297653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:04.877944  297653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:59:04.878278  297653 mustload.go:66] Loading cluster: addons-494116
	I1121 13:59:04.878701  297653 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:04.878738  297653 addons.go:622] checking whether the cluster is paused
	I1121 13:59:04.878933  297653 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:04.878967  297653 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:59:04.879463  297653 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:59:04.897286  297653 ssh_runner.go:195] Run: systemctl --version
	I1121 13:59:04.897341  297653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:59:04.919899  297653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:59:05.025716  297653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:59:05.025866  297653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:59:05.079262  297653 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 13:59:05.079283  297653 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 13:59:05.079288  297653 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 13:59:05.079292  297653 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 13:59:05.079296  297653 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 13:59:05.079300  297653 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 13:59:05.079303  297653 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 13:59:05.079306  297653 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 13:59:05.079310  297653 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 13:59:05.079317  297653 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 13:59:05.079320  297653 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 13:59:05.079323  297653 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 13:59:05.079326  297653 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 13:59:05.079329  297653 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 13:59:05.079332  297653 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 13:59:05.079337  297653 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 13:59:05.079340  297653 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 13:59:05.079344  297653 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 13:59:05.079347  297653 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 13:59:05.079350  297653 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 13:59:05.079356  297653 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 13:59:05.079359  297653 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 13:59:05.079362  297653 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 13:59:05.079365  297653 cri.go:89] found id: ""
	I1121 13:59:05.079425  297653 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:59:05.097775  297653 out.go:203] 
	W1121 13:59:05.100713  297653 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:59:05.100785  297653 out.go:285] * 
	* 
	W1121 13:59:05.124020  297653 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:59:05.127079  297653 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.33s)
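Note: every addons-disable failure in this run follows the same pattern: the command exits with MK_ADDON_DISABLE_PAUSED because minikube's paused-state check shells out to "sudo runc list -f json" on the node, which fails with "open /run/runc: no such file or directory". A sketch for confirming this by hand, assuming the addons-494116 profile is still up (runc keeps container state under /run/runc by default; if that directory is missing, CRI-O is presumably launching containers through a different OCI runtime, e.g. crun):

	out/minikube-linux-arm64 -p addons-494116 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p addons-494116 ssh -- ls -ld /run/runc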

TestAddons/parallel/Registry (15.12s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.904199ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003485543s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00385732s
addons_test.go:392: (dbg) Run:  kubectl --context addons-494116 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-494116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-494116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.417591586s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable registry --alsologtostderr -v=1: exit status 11 (337.100151ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 13:59:30.269592  298586 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:59:30.270618  298586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:30.270635  298586 out.go:374] Setting ErrFile to fd 2...
	I1121 13:59:30.270641  298586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:30.270985  298586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:59:30.271355  298586 mustload.go:66] Loading cluster: addons-494116
	I1121 13:59:30.271817  298586 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:30.271837  298586 addons.go:622] checking whether the cluster is paused
	I1121 13:59:30.272034  298586 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:30.272055  298586 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:59:30.272606  298586 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:59:30.301949  298586 ssh_runner.go:195] Run: systemctl --version
	I1121 13:59:30.302006  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:59:30.323526  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:59:30.428607  298586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:59:30.428710  298586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:59:30.477998  298586 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 13:59:30.478022  298586 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 13:59:30.478028  298586 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 13:59:30.478044  298586 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 13:59:30.478048  298586 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 13:59:30.478053  298586 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 13:59:30.478056  298586 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 13:59:30.478059  298586 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 13:59:30.478062  298586 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 13:59:30.478069  298586 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 13:59:30.478075  298586 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 13:59:30.478078  298586 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 13:59:30.478082  298586 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 13:59:30.478088  298586 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 13:59:30.478091  298586 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 13:59:30.478096  298586 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 13:59:30.478102  298586 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 13:59:30.478106  298586 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 13:59:30.478109  298586 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 13:59:30.478111  298586 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 13:59:30.478117  298586 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 13:59:30.478120  298586 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 13:59:30.478123  298586 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 13:59:30.478125  298586 cri.go:89] found id: ""
	I1121 13:59:30.478180  298586 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:59:30.497695  298586 out.go:203] 
	W1121 13:59:30.500964  298586 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:59:30.500994  298586 out.go:285] * 
	* 
	W1121 13:59:30.506402  298586 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:59:30.509738  298586 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.12s)
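Note: the registry checks themselves passed (both pods healthy, in-cluster wget succeeded); only the shared disable-addon step failed, with the same runc error as above. The in-cluster reachability probe from addons_test.go:397 can be replayed by hand:

	kubectl --context addons-494116 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"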

TestAddons/parallel/RegistryCreds (0.51s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.76021ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-494116
addons_test.go:332: (dbg) Run:  kubectl --context addons-494116 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (262.535531ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 14:00:25.731888  300208 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:00:25.732629  300208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:25.732648  300208 out.go:374] Setting ErrFile to fd 2...
	I1121 14:00:25.732655  300208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:25.732982  300208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:00:25.733418  300208 mustload.go:66] Loading cluster: addons-494116
	I1121 14:00:25.733888  300208 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:25.733915  300208 addons.go:622] checking whether the cluster is paused
	I1121 14:00:25.734080  300208 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:25.734101  300208 host.go:66] Checking if "addons-494116" exists ...
	I1121 14:00:25.734796  300208 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 14:00:25.753083  300208 ssh_runner.go:195] Run: systemctl --version
	I1121 14:00:25.753153  300208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 14:00:25.776218  300208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 14:00:25.883137  300208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:00:25.883267  300208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:00:25.915512  300208 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 14:00:25.915542  300208 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 14:00:25.915547  300208 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 14:00:25.915551  300208 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 14:00:25.915556  300208 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 14:00:25.915560  300208 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 14:00:25.915563  300208 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 14:00:25.915567  300208 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 14:00:25.915570  300208 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 14:00:25.915577  300208 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 14:00:25.915583  300208 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 14:00:25.915587  300208 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 14:00:25.915591  300208 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 14:00:25.915594  300208 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 14:00:25.915598  300208 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 14:00:25.915603  300208 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 14:00:25.915610  300208 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 14:00:25.915613  300208 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 14:00:25.915617  300208 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 14:00:25.915620  300208 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 14:00:25.915624  300208 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 14:00:25.915627  300208 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 14:00:25.915631  300208 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 14:00:25.915634  300208 cri.go:89] found id: ""
	I1121 14:00:25.915697  300208 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:00:25.931807  300208 out.go:203] 
	W1121 14:00:25.934881  300208 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:00:25.934917  300208 out.go:285] * 
	* 
	W1121 14:00:25.939863  300208 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:00:25.942791  300208 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.51s)
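Note: as with Registry, the configure step and the secret check completed; the failure is again the disable-addon helper. The secret check from addons_test.go:332 can be replayed by hand (the grep filter is illustrative, not part of the test):

	kubectl --context addons-494116 -n kube-system get secret -o yaml | grep -i registry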

TestAddons/parallel/Ingress (145.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-494116 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-494116 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-494116 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [eb0f6414-281e-4cb7-81d8-cb34997a11f9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [eb0f6414-281e-4cb7-81d8-cb34997a11f9] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003541133s
I1121 13:59:51.887763  291060 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.835135462s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-494116 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
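Note: exit status 28 above is curl's "operation timed out" code propagated through ssh, i.e. the request to the in-node ingress never completed within the test's window. A sketch for retrying the probe with verbose output and a bounded timeout (-m caps curl's total time), plus a check that the controller pod is actually serving:

	out/minikube-linux-arm64 -p addons-494116 ssh -- \
	  curl -sv -m 15 http://127.0.0.1/ -H 'Host: nginx.example.com'
	kubectl --context addons-494116 -n ingress-nginx get pods -o wide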
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-494116
helpers_test.go:243: (dbg) docker inspect addons-494116:

-- stdout --
	[
	    {
	        "Id": "e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76",
	        "Created": "2025-11-21T13:56:52.980210617Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292223,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T13:56:53.060083115Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76/hostname",
	        "HostsPath": "/var/lib/docker/containers/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76/hosts",
	        "LogPath": "/var/lib/docker/containers/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76-json.log",
	        "Name": "/addons-494116",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-494116:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-494116",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76",
	                "LowerDir": "/var/lib/docker/overlay2/b16329cab56eeec1a57b3a7fc8e23d8becc0dc2af28741219de2d5e7efcbb21e-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b16329cab56eeec1a57b3a7fc8e23d8becc0dc2af28741219de2d5e7efcbb21e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b16329cab56eeec1a57b3a7fc8e23d8becc0dc2af28741219de2d5e7efcbb21e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b16329cab56eeec1a57b3a7fc8e23d8becc0dc2af28741219de2d5e7efcbb21e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-494116",
	                "Source": "/var/lib/docker/volumes/addons-494116/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-494116",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-494116",
	                "name.minikube.sigs.k8s.io": "addons-494116",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79d62b5fcc1fcb3e4f9091f14e0bbd056fa76c568576027ba5277b5908cb5326",
	            "SandboxKey": "/var/run/docker/netns/79d62b5fcc1f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-494116": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:ce:fc:f1:71:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "587264a7b645551a83ce3ffe958371206d7c19bdd86cc4c3f3fb0b4264d0950a",
	                    "EndpointID": "89d81790eddd7906eb2f05d3fdd58f8b77c2ee055f030450819822caa1d92169",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-494116",
	                        "e74411d169a0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
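Note: the NetworkSettings.Ports block above is what minikube's cli_runner queries (cf. the inspect template in the stderr logs earlier). The same lookup can be run directly, e.g. for the SSH port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-494116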
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-494116 -n addons-494116
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-494116 logs -n 25: (1.473741219s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-223827                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-223827 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │
	│ start   │ --download-only -p binary-mirror-355307 --alsologtostderr --binary-mirror http://127.0.0.1:37741 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-355307   │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │
	│ delete  │ -p binary-mirror-355307                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-355307   │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │
	│ addons  │ enable dashboard -p addons-494116                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-494116                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │
	│ start   │ -p addons-494116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:59 UTC │
	│ addons  │ addons-494116 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ addons  │ addons-494116 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-494116 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ addons  │ addons-494116 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ addons  │ addons-494116 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ ip      │ addons-494116 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │ 21 Nov 25 13:59 UTC │
	│ addons  │ addons-494116 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ addons  │ addons-494116 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ addons  │ addons-494116 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ ssh     │ addons-494116 ssh cat /opt/local-path-provisioner/pvc-26d6969d-2083-4cb8-a3a0-2581439214de_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │ 21 Nov 25 13:59 UTC │
	│ addons  │ addons-494116 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ addons  │ addons-494116 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ ssh     │ addons-494116 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │
	│ addons  │ addons-494116 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 14:00 UTC │                     │
	│ addons  │ addons-494116 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 14:00 UTC │                     │
	│ addons  │ addons-494116 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 14:00 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-494116                                                                                                                                                                                                                                                                                                                                                                                           │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 14:00 UTC │ 21 Nov 25 14:00 UTC │
	│ addons  │ addons-494116 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 14:00 UTC │                     │
	│ ip      │ addons-494116 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 14:02 UTC │ 21 Nov 25 14:02 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:56:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
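Every entry below follows that header: severity letter, month+day, wall-clock time, thread id, then source file:line. For example, "I1121 13:56:26.729816  291820 out.go:360]" reads as an Info line from Nov 21 at 13:56:26.729816, emitted from out.go line 360 by thread 291820. A minimal sketch for skimming a saved copy of such a log for problems only (the filename is a placeholder):

    # keep only Warning/Error entries from a saved copy of this log
    grep -E '^[[:space:]]*[WE][0-9]{4} ' last_start.log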
	I1121 13:56:26.729816  291820 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:56:26.729942  291820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:26.729979  291820 out.go:374] Setting ErrFile to fd 2...
	I1121 13:56:26.730000  291820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:26.730270  291820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:56:26.730771  291820 out.go:368] Setting JSON to false
	I1121 13:56:26.731611  291820 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5939,"bootTime":1763727448,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 13:56:26.731723  291820 start.go:143] virtualization:  
	I1121 13:56:26.735163  291820 out.go:179] * [addons-494116] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 13:56:26.738222  291820 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 13:56:26.738355  291820 notify.go:221] Checking for updates...
	I1121 13:56:26.744055  291820 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:56:26.746898  291820 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 13:56:26.749755  291820 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 13:56:26.752553  291820 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 13:56:26.755438  291820 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 13:56:26.758561  291820 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:56:26.783802  291820 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 13:56:26.783947  291820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:26.850875  291820 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-21 13:56:26.84194549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
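minikube parses the dump above to validate the driver; the same fields can be spot-checked by hand with Docker's own template syntax (the field names below come from `docker system info`, not from minikube):

    # print just the fields minikube cares about for driver health
    docker system info --format 'server={{.ServerVersion}} storage={{.Driver}} cgroup={{.CgroupDriver}} cpus={{.NCPU}} mem={{.MemTotal}}'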
	I1121 13:56:26.850984  291820 docker.go:319] overlay module found
	I1121 13:56:26.854046  291820 out.go:179] * Using the docker driver based on user configuration
	I1121 13:56:26.856895  291820 start.go:309] selected driver: docker
	I1121 13:56:26.856916  291820 start.go:930] validating driver "docker" against <nil>
	I1121 13:56:26.856931  291820 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 13:56:26.857675  291820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:26.909643  291820 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-21 13:56:26.900710987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:26.909787  291820 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:56:26.910023  291820 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 13:56:26.912881  291820 out.go:179] * Using Docker driver with root privileges
	I1121 13:56:26.915743  291820 cni.go:84] Creating CNI manager for ""
	I1121 13:56:26.915810  291820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:56:26.915830  291820 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 13:56:26.915958  291820 start.go:353] cluster config:
	{Name:addons-494116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-494116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:56:26.919032  291820 out.go:179] * Starting "addons-494116" primary control-plane node in "addons-494116" cluster
	I1121 13:56:26.921806  291820 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 13:56:26.924743  291820 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 13:56:26.928451  291820 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:26.928495  291820 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 13:56:26.928505  291820 cache.go:65] Caching tarball of preloaded images
	I1121 13:56:26.928596  291820 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 13:56:26.928608  291820 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 13:56:26.928960  291820 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/config.json ...
	I1121 13:56:26.928983  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/config.json: {Name:mk6b810371b11a03b9c7383d68ba56a04cef9656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:26.929153  291820 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 13:56:26.944697  291820 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:56:26.944818  291820 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1121 13:56:26.944844  291820 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1121 13:56:26.944849  291820 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1121 13:56:26.944860  291820 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1121 13:56:26.944869  291820 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1121 13:56:44.880838  291820 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1121 13:56:44.880874  291820 cache.go:243] Successfully downloaded all kic artifacts
	I1121 13:56:44.880902  291820 start.go:360] acquireMachinesLock for addons-494116: {Name:mk57a69ee47985a543fa348598b6ec0e32b4cb76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 13:56:44.881039  291820 start.go:364] duration metric: took 119.435µs to acquireMachinesLock for "addons-494116"
	I1121 13:56:44.881065  291820 start.go:93] Provisioning new machine with config: &{Name:addons-494116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-494116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 13:56:44.881143  291820 start.go:125] createHost starting for "" (driver="docker")
	I1121 13:56:44.884563  291820 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1121 13:56:44.884801  291820 start.go:159] libmachine.API.Create for "addons-494116" (driver="docker")
	I1121 13:56:44.884838  291820 client.go:173] LocalClient.Create starting
	I1121 13:56:44.884961  291820 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem
	I1121 13:56:45.172232  291820 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem
	I1121 13:56:46.090506  291820 cli_runner.go:164] Run: docker network inspect addons-494116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 13:56:46.106635  291820 cli_runner.go:211] docker network inspect addons-494116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 13:56:46.106722  291820 network_create.go:284] running [docker network inspect addons-494116] to gather additional debugging logs...
	I1121 13:56:46.106744  291820 cli_runner.go:164] Run: docker network inspect addons-494116
	W1121 13:56:46.122638  291820 cli_runner.go:211] docker network inspect addons-494116 returned with exit code 1
	I1121 13:56:46.122670  291820 network_create.go:287] error running [docker network inspect addons-494116]: docker network inspect addons-494116: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-494116 not found
	I1121 13:56:46.122685  291820 network_create.go:289] output of [docker network inspect addons-494116]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-494116 not found
	
	** /stderr **
	I1121 13:56:46.122788  291820 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 13:56:46.140961  291820 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400199b230}
	I1121 13:56:46.141005  291820 network_create.go:124] attempt to create docker network addons-494116 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1121 13:56:46.141072  291820 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-494116 addons-494116
	I1121 13:56:46.200798  291820 network_create.go:108] docker network addons-494116 192.168.49.0/24 created
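To confirm what was just created, the subnet and gateway can be read back with the same Go-template style the log's inspect calls use:

    docker network inspect addons-494116 \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
    # expected, per the log above: 192.168.49.0/24 via 192.168.49.1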
	I1121 13:56:46.200827  291820 kic.go:121] calculated static IP "192.168.49.2" for the "addons-494116" container
	I1121 13:56:46.200918  291820 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 13:56:46.216641  291820 cli_runner.go:164] Run: docker volume create addons-494116 --label name.minikube.sigs.k8s.io=addons-494116 --label created_by.minikube.sigs.k8s.io=true
	I1121 13:56:46.234821  291820 oci.go:103] Successfully created a docker volume addons-494116
	I1121 13:56:46.234906  291820 cli_runner.go:164] Run: docker run --rm --name addons-494116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-494116 --entrypoint /usr/bin/test -v addons-494116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 13:56:48.481816  291820 cli_runner.go:217] Completed: docker run --rm --name addons-494116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-494116 --entrypoint /usr/bin/test -v addons-494116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (2.246873865s)
	I1121 13:56:48.481848  291820 oci.go:107] Successfully prepared a docker volume addons-494116
	I1121 13:56:48.481902  291820 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:48.481912  291820 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 13:56:48.481972  291820 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-494116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 13:56:52.912298  291820 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-494116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.430284801s)
	I1121 13:56:52.912329  291820 kic.go:203] duration metric: took 4.430413812s to extract preloaded images to volume ...
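The two `docker run` calls above are minikube's volume-seeding pattern: a throwaway container mounts the named volume alongside the host tarball and untars into it, so the kic container later starts with /var pre-populated. The pattern reduced to its shape (the variables are placeholders, not minikube names):

    # seed a named volume from an lz4-compressed tarball via a disposable container
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$HOST_TARBALL":/preloaded.tar:ro \
      -v "$VOLUME_NAME":/extractDir \
      "$BASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir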
	W1121 13:56:52.912513  291820 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 13:56:52.912627  291820 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 13:56:52.965421  291820 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-494116 --name addons-494116 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-494116 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-494116 --network addons-494116 --ip 192.168.49.2 --volume addons-494116:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 13:56:53.290587  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Running}}
	I1121 13:56:53.312476  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:56:53.336579  291820 cli_runner.go:164] Run: docker exec addons-494116 stat /var/lib/dpkg/alternatives/iptables
	I1121 13:56:53.389853  291820 oci.go:144] the created container "addons-494116" has a running status.
	I1121 13:56:53.389879  291820 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa...
	I1121 13:56:54.115307  291820 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 13:56:54.137424  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:56:54.164713  291820 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 13:56:54.164733  291820 kic_runner.go:114] Args: [docker exec --privileged addons-494116 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 13:56:54.217397  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:56:54.236808  291820 machine.go:94] provisionDockerMachine start ...
	I1121 13:56:54.237072  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:54.256822  291820 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:54.257288  291820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1121 13:56:54.257313  291820 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 13:56:54.403877  291820 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-494116
	
	I1121 13:56:54.403898  291820 ubuntu.go:182] provisioning hostname "addons-494116"
	I1121 13:56:54.403962  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:54.423901  291820 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:54.424537  291820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1121 13:56:54.424557  291820 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-494116 && echo "addons-494116" | sudo tee /etc/hostname
	I1121 13:56:54.577841  291820 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-494116
	
	I1121 13:56:54.577937  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:54.596785  291820 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:54.597085  291820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1121 13:56:54.597116  291820 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-494116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-494116/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-494116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 13:56:54.736781  291820 main.go:143] libmachine: SSH cmd err, output: <nil>: 
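Each of these SSH rounds first resolves the container's published SSH port (33138 here) with the inspect template shown above; the same lookup works by hand to reach the node directly (key path is the one logged at key-creation time):

    PORT=$(docker container inspect addons-494116 \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}')
    ssh -p "$PORT" -i /home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa docker@127.0.0.1 hostname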
	I1121 13:56:54.736808  291820 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 13:56:54.736839  291820 ubuntu.go:190] setting up certificates
	I1121 13:56:54.736855  291820 provision.go:84] configureAuth start
	I1121 13:56:54.736917  291820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-494116
	I1121 13:56:54.753975  291820 provision.go:143] copyHostCerts
	I1121 13:56:54.754066  291820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 13:56:54.754202  291820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 13:56:54.754266  291820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 13:56:54.754318  291820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.addons-494116 san=[127.0.0.1 192.168.49.2 addons-494116 localhost minikube]
	I1121 13:56:55.471745  291820 provision.go:177] copyRemoteCerts
	I1121 13:56:55.471826  291820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 13:56:55.471867  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:55.489556  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:56:55.588102  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 13:56:55.605998  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 13:56:55.623844  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 13:56:55.641969  291820 provision.go:87] duration metric: took 905.089341ms to configureAuth
	I1121 13:56:55.641995  291820 ubuntu.go:206] setting minikube options for container-runtime
	I1121 13:56:55.642190  291820 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:56:55.642289  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:55.659116  291820 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:55.659479  291820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1121 13:56:55.659496  291820 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 13:56:55.973162  291820 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 13:56:55.973185  291820 machine.go:97] duration metric: took 1.736328827s to provisionDockerMachine
	I1121 13:56:55.973196  291820 client.go:176] duration metric: took 11.088347531s to LocalClient.Create
	I1121 13:56:55.973210  291820 start.go:167] duration metric: took 11.088410743s to libmachine.API.Create "addons-494116"
	I1121 13:56:55.973218  291820 start.go:293] postStartSetup for "addons-494116" (driver="docker")
	I1121 13:56:55.973232  291820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 13:56:55.973298  291820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 13:56:55.973344  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:55.992652  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:56:56.092643  291820 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 13:56:56.096093  291820 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 13:56:56.096125  291820 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 13:56:56.096138  291820 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 13:56:56.096235  291820 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 13:56:56.096292  291820 start.go:296] duration metric: took 123.067355ms for postStartSetup
	I1121 13:56:56.096673  291820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-494116
	I1121 13:56:56.113143  291820 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/config.json ...
	I1121 13:56:56.113446  291820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 13:56:56.113494  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:56.130184  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:56:56.225265  291820 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 13:56:56.229839  291820 start.go:128] duration metric: took 11.348679178s to createHost
	I1121 13:56:56.229865  291820 start.go:83] releasing machines lock for "addons-494116", held for 11.348817616s
	I1121 13:56:56.229950  291820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-494116
	I1121 13:56:56.246720  291820 ssh_runner.go:195] Run: cat /version.json
	I1121 13:56:56.246772  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:56.246788  291820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 13:56:56.246849  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:56.270676  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:56:56.273950  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:56:56.367953  291820 ssh_runner.go:195] Run: systemctl --version
	I1121 13:56:56.467643  291820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 13:56:56.505583  291820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 13:56:56.510002  291820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 13:56:56.510074  291820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 13:56:56.538697  291820 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 13:56:56.538773  291820 start.go:496] detecting cgroup driver to use...
	I1121 13:56:56.538824  291820 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 13:56:56.538898  291820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 13:56:56.556605  291820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 13:56:56.569428  291820 docker.go:218] disabling cri-docker service (if available) ...
	I1121 13:56:56.569493  291820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 13:56:56.587130  291820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 13:56:56.605981  291820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 13:56:56.723958  291820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 13:56:56.838244  291820 docker.go:234] disabling docker service ...
	I1121 13:56:56.838320  291820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 13:56:56.858597  291820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 13:56:56.871734  291820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 13:56:56.988567  291820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 13:56:57.110459  291820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 13:56:57.122977  291820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 13:56:57.136725  291820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 13:56:57.136792  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.145298  291820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 13:56:57.145417  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.154362  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.163016  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.171885  291820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 13:56:57.180315  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.188981  291820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.202151  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.210812  291820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 13:56:57.218792  291820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 13:56:57.226317  291820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:56:57.341083  291820 ssh_runner.go:195] Run: sudo systemctl restart crio
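Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands rather than captured from disk (section headers assumed from CRI-O's stock drop-in layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]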
	I1121 13:56:57.517201  291820 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 13:56:57.517297  291820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 13:56:57.521327  291820 start.go:564] Will wait 60s for crictl version
	I1121 13:56:57.521402  291820 ssh_runner.go:195] Run: which crictl
	I1121 13:56:57.525236  291820 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 13:56:57.550046  291820 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
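The same version probe can be run by hand against the endpoint written to /etc/crictl.yaml a moment earlier:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version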
	I1121 13:56:57.550149  291820 ssh_runner.go:195] Run: crio --version
	I1121 13:56:57.579451  291820 ssh_runner.go:195] Run: crio --version
	I1121 13:56:57.612819  291820 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 13:56:57.615689  291820 cli_runner.go:164] Run: docker network inspect addons-494116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 13:56:57.630884  291820 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1121 13:56:57.634921  291820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
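That one-liner is minikube's idempotent /etc/hosts update: strip any stale line for the name, append the fresh mapping, then copy the temp file back over /etc/hosts. Expanded, the same pattern reads:

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts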
	I1121 13:56:57.644668  291820 kubeadm.go:884] updating cluster {Name:addons-494116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-494116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 13:56:57.644785  291820 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:57.644844  291820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 13:56:57.676751  291820 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 13:56:57.676775  291820 crio.go:433] Images already preloaded, skipping extraction
	I1121 13:56:57.676835  291820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 13:56:57.700771  291820 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 13:56:57.700797  291820 cache_images.go:86] Images are preloaded, skipping loading
	I1121 13:56:57.700805  291820 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1121 13:56:57.700897  291820 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-494116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-494116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 13:56:57.700984  291820 ssh_runner.go:195] Run: crio config
	I1121 13:56:57.758834  291820 cni.go:84] Creating CNI manager for ""
	I1121 13:56:57.758867  291820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:56:57.758886  291820 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 13:56:57.758909  291820 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-494116 NodeName:addons-494116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 13:56:57.759068  291820 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-494116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
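This rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below; a file like it can be sanity-checked without touching the node via kubeadm's standard dry-run mode (path from the log):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run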
	
	I1121 13:56:57.759156  291820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 13:56:57.767215  291820 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 13:56:57.767359  291820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 13:56:57.775099  291820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1121 13:56:57.788186  291820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 13:56:57.801148  291820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1121 13:56:57.814308  291820 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1121 13:56:57.817892  291820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 13:56:57.827382  291820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:56:57.934722  291820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 13:56:57.950215  291820 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116 for IP: 192.168.49.2
	I1121 13:56:57.950238  291820 certs.go:195] generating shared ca certs ...
	I1121 13:56:57.950254  291820 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:57.950424  291820 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 13:56:58.340446  291820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt ...
	I1121 13:56:58.340479  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt: {Name:mk01ef5db40284bad7e0471d9cd816e60aef2b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:58.340701  291820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key ...
	I1121 13:56:58.340718  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key: {Name:mkddaed89356e289a6f4f6f92ae42c242180f1b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:58.340811  291820 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 13:56:59.233088  291820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt ...
	I1121 13:56:59.233123  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt: {Name:mkae5d1cd20f520064d91f058cc7cf77381cc0dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:59.233299  291820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key ...
	I1121 13:56:59.233312  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key: {Name:mkf3e8a12a260634981c0a73f3ea867340b04447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:59.233406  291820 certs.go:257] generating profile certs ...
	I1121 13:56:59.233464  291820 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.key
	I1121 13:56:59.233482  291820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt with IP's: []
	I1121 13:56:59.637624  291820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt ...
	I1121 13:56:59.637657  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: {Name:mk4153a46c912172ef9e929bdb69daf498e63595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:59.637844  291820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.key ...
	I1121 13:56:59.637858  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.key: {Name:mkb94890a5ce0a35bb57f616068aa0a91111d832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:59.637944  291820 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key.8fb2f0cb
	I1121 13:56:59.637965  291820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt.8fb2f0cb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1121 13:57:00.552224  291820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt.8fb2f0cb ...
	I1121 13:57:00.552265  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt.8fb2f0cb: {Name:mk558f81bfae5871bad15b338e986a8536c6e4eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:00.552489  291820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key.8fb2f0cb ...
	I1121 13:57:00.552511  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key.8fb2f0cb: {Name:mkfd84d898dc10022503813840a4ac1695fb5ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:00.552593  291820 certs.go:382] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt.8fb2f0cb -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt
	I1121 13:57:00.552687  291820 certs.go:386] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key.8fb2f0cb -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key
	I1121 13:57:00.552738  291820 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.key
	I1121 13:57:00.552759  291820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.crt with IP's: []
	I1121 13:57:01.088130  291820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.crt ...
	I1121 13:57:01.088164  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.crt: {Name:mk94379303f53e5344b40a7999289ea8885bcde6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:01.088348  291820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.key ...
	I1121 13:57:01.088363  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.key: {Name:mkc4cde24a02f2ff2cdb01962dce6dda257c577d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:01.088575  291820 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 13:57:01.088622  291820 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 13:57:01.088649  291820 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 13:57:01.088680  291820 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 13:57:01.089608  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 13:57:01.111153  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 13:57:01.129574  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 13:57:01.148784  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 13:57:01.168220  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 13:57:01.188040  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 13:57:01.207518  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 13:57:01.226338  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 13:57:01.244460  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 13:57:01.263536  291820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 13:57:01.277620  291820 ssh_runner.go:195] Run: openssl version
	I1121 13:57:01.284224  291820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 13:57:01.293221  291820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:57:01.297477  291820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:57:01.297550  291820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:57:01.339475  291820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
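	Note: the openssl and ln steps above implement the standard OpenSSL CA-directory layout: certificates under /etc/ssl/certs are looked up by subject-name hash, so the symlink b5213941.0 is the hash of minikubeCA.pem plus a ".0" suffix. The name can be reproduced by hand from the same certificate:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"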
	I1121 13:57:01.348344  291820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 13:57:01.352101  291820 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 13:57:01.352151  291820 kubeadm.go:401] StartCluster: {Name:addons-494116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-494116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:57:01.352225  291820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:57:01.352289  291820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:57:01.381277  291820 cri.go:89] found id: ""
	I1121 13:57:01.381364  291820 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 13:57:01.389616  291820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 13:57:01.397714  291820 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 13:57:01.397833  291820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 13:57:01.406076  291820 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 13:57:01.406102  291820 kubeadm.go:158] found existing configuration files:
	
	I1121 13:57:01.406155  291820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 13:57:01.414329  291820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 13:57:01.414417  291820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 13:57:01.422408  291820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 13:57:01.430855  291820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 13:57:01.430951  291820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 13:57:01.438812  291820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 13:57:01.447299  291820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 13:57:01.447373  291820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 13:57:01.455362  291820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 13:57:01.463798  291820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 13:57:01.463895  291820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 13:57:01.471749  291820 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 13:57:01.538851  291820 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 13:57:01.539138  291820 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 13:57:01.610817  291820 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 13:57:16.606218  291820 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 13:57:16.606274  291820 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 13:57:16.606366  291820 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 13:57:16.606423  291820 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 13:57:16.606459  291820 kubeadm.go:319] OS: Linux
	I1121 13:57:16.606509  291820 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 13:57:16.606559  291820 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 13:57:16.606608  291820 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 13:57:16.606658  291820 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 13:57:16.606708  291820 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 13:57:16.606760  291820 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 13:57:16.606807  291820 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 13:57:16.606857  291820 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 13:57:16.606905  291820 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 13:57:16.606980  291820 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 13:57:16.607079  291820 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 13:57:16.607172  291820 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 13:57:16.607236  291820 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 13:57:16.610181  291820 out.go:252]   - Generating certificates and keys ...
	I1121 13:57:16.610369  291820 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 13:57:16.610463  291820 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 13:57:16.610539  291820 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 13:57:16.610609  291820 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 13:57:16.610688  291820 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 13:57:16.610746  291820 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 13:57:16.610813  291820 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 13:57:16.610966  291820 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-494116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 13:57:16.611069  291820 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 13:57:16.611231  291820 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-494116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 13:57:16.611341  291820 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 13:57:16.611444  291820 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 13:57:16.611503  291820 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 13:57:16.611568  291820 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 13:57:16.611627  291820 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 13:57:16.611692  291820 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 13:57:16.611761  291820 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 13:57:16.611835  291820 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 13:57:16.611896  291820 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 13:57:16.611987  291820 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 13:57:16.612061  291820 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 13:57:16.615205  291820 out.go:252]   - Booting up control plane ...
	I1121 13:57:16.615323  291820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 13:57:16.615412  291820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 13:57:16.615504  291820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 13:57:16.615648  291820 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 13:57:16.615788  291820 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 13:57:16.615923  291820 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 13:57:16.616021  291820 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 13:57:16.616066  291820 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 13:57:16.616205  291820 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 13:57:16.616319  291820 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 13:57:16.616413  291820 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50089421s
	I1121 13:57:16.616519  291820 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 13:57:16.616632  291820 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1121 13:57:16.616743  291820 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 13:57:16.616852  291820 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 13:57:16.616941  291820 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.623433392s
	I1121 13:57:16.617021  291820 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.168963206s
	I1121 13:57:16.617111  291820 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001384069s
	I1121 13:57:16.617235  291820 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 13:57:16.617373  291820 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 13:57:16.617449  291820 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 13:57:16.617687  291820 kubeadm.go:319] [mark-control-plane] Marking the node addons-494116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 13:57:16.617766  291820 kubeadm.go:319] [bootstrap-token] Using token: 3aw9oe.cyfsti0enmout33u
	I1121 13:57:16.622688  291820 out.go:252]   - Configuring RBAC rules ...
	I1121 13:57:16.622859  291820 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 13:57:16.622971  291820 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 13:57:16.623148  291820 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 13:57:16.623342  291820 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 13:57:16.623505  291820 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 13:57:16.623631  291820 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 13:57:16.623773  291820 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 13:57:16.623854  291820 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 13:57:16.623921  291820 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 13:57:16.623949  291820 kubeadm.go:319] 
	I1121 13:57:16.624058  291820 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 13:57:16.624072  291820 kubeadm.go:319] 
	I1121 13:57:16.624154  291820 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 13:57:16.624163  291820 kubeadm.go:319] 
	I1121 13:57:16.624190  291820 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 13:57:16.624260  291820 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 13:57:16.624321  291820 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 13:57:16.624333  291820 kubeadm.go:319] 
	I1121 13:57:16.624409  291820 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 13:57:16.624452  291820 kubeadm.go:319] 
	I1121 13:57:16.624541  291820 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 13:57:16.624554  291820 kubeadm.go:319] 
	I1121 13:57:16.624636  291820 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 13:57:16.624752  291820 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 13:57:16.624864  291820 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 13:57:16.624908  291820 kubeadm.go:319] 
	I1121 13:57:16.625022  291820 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 13:57:16.625116  291820 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 13:57:16.625124  291820 kubeadm.go:319] 
	I1121 13:57:16.625240  291820 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3aw9oe.cyfsti0enmout33u \
	I1121 13:57:16.625362  291820 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 13:57:16.625389  291820 kubeadm.go:319] 	--control-plane 
	I1121 13:57:16.625396  291820 kubeadm.go:319] 
	I1121 13:57:16.625523  291820 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 13:57:16.625561  291820 kubeadm.go:319] 
	I1121 13:57:16.625677  291820 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3aw9oe.cyfsti0enmout33u \
	I1121 13:57:16.625833  291820 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
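	Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed with the standard kubeadm recipe to validate a join command; the certificate path here follows the certificateDir logged earlier ([certs] Using certificateDir folder "/var/lib/minikube/certs"):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'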
	I1121 13:57:16.625870  291820 cni.go:84] Creating CNI manager for ""
	I1121 13:57:16.625882  291820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:57:16.629003  291820 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 13:57:16.631910  291820 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 13:57:16.636659  291820 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 13:57:16.636681  291820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 13:57:16.649468  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
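	Note: with the docker driver and the crio runtime, minikube selects kindnet as the CNI (per the cni.go lines above) and applies the rendered manifest from /var/tmp/minikube/cni.yaml with the bundled kubectl. A quick way to confirm the CNI pods came up afterwards (the app=kindnet label is an assumption about the kindnet manifest, not taken from this log):

	kubectl -n kube-system get pods -l app=kindnet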
	I1121 13:57:16.940450  291820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 13:57:16.940579  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:16.940659  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-494116 minikube.k8s.io/updated_at=2025_11_21T13_57_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=addons-494116 minikube.k8s.io/primary=true
	I1121 13:57:17.181868  291820 ops.go:34] apiserver oom_adj: -16
	I1121 13:57:17.181972  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:17.682140  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:18.182529  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:18.682913  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:19.182468  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:19.682519  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:20.182156  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:20.682093  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:21.182130  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:21.682677  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:21.792148  291820 kubeadm.go:1114] duration metric: took 4.851615238s to wait for elevateKubeSystemPrivileges
	I1121 13:57:21.792178  291820 kubeadm.go:403] duration metric: took 20.440031677s to StartCluster
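	Note: the burst of "kubectl get sa default" calls above is minikube polling until the default service account has been provisioned, after which the minikube-rbac ClusterRoleBinding created at 13:57:16 (cluster-admin for kube-system:default) is effective. Both can be checked by hand:

	kubectl get clusterrolebinding minikube-rbac
	kubectl -n default get sa default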
	I1121 13:57:21.792195  291820 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:21.792303  291820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 13:57:21.792733  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:21.792966  291820 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 13:57:21.793066  291820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 13:57:21.793325  291820 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:57:21.793370  291820 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1121 13:57:21.793489  291820 addons.go:70] Setting yakd=true in profile "addons-494116"
	I1121 13:57:21.793508  291820 addons.go:239] Setting addon yakd=true in "addons-494116"
	I1121 13:57:21.793534  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.793512  291820 addons.go:70] Setting inspektor-gadget=true in profile "addons-494116"
	I1121 13:57:21.793585  291820 addons.go:239] Setting addon inspektor-gadget=true in "addons-494116"
	I1121 13:57:21.793636  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.794011  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.794219  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.794611  291820 addons.go:70] Setting metrics-server=true in profile "addons-494116"
	I1121 13:57:21.794630  291820 addons.go:239] Setting addon metrics-server=true in "addons-494116"
	I1121 13:57:21.794660  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.795078  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.796960  291820 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-494116"
	I1121 13:57:21.796990  291820 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-494116"
	I1121 13:57:21.797018  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.797444  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.797891  291820 addons.go:70] Setting cloud-spanner=true in profile "addons-494116"
	I1121 13:57:21.797916  291820 addons.go:239] Setting addon cloud-spanner=true in "addons-494116"
	I1121 13:57:21.797941  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.798342  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.803931  291820 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-494116"
	I1121 13:57:21.803962  291820 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-494116"
	I1121 13:57:21.804006  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.804497  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.808368  291820 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-494116"
	I1121 13:57:21.808443  291820 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-494116"
	I1121 13:57:21.808474  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.808931  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.814077  291820 addons.go:70] Setting registry=true in profile "addons-494116"
	I1121 13:57:21.814165  291820 addons.go:239] Setting addon registry=true in "addons-494116"
	I1121 13:57:21.814286  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.847397  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.817209  291820 addons.go:70] Setting registry-creds=true in profile "addons-494116"
	I1121 13:57:21.866895  291820 addons.go:239] Setting addon registry-creds=true in "addons-494116"
	I1121 13:57:21.866965  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.817228  291820 addons.go:70] Setting storage-provisioner=true in profile "addons-494116"
	I1121 13:57:21.876846  291820 addons.go:239] Setting addon storage-provisioner=true in "addons-494116"
	I1121 13:57:21.876913  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.877479  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.817412  291820 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-494116"
	I1121 13:57:21.886785  291820 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-494116"
	I1121 13:57:21.887133  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.887527  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.817422  291820 addons.go:70] Setting volcano=true in profile "addons-494116"
	I1121 13:57:21.898555  291820 addons.go:239] Setting addon volcano=true in "addons-494116"
	I1121 13:57:21.898596  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.899049  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.907154  291820 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 13:57:21.910028  291820 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 13:57:21.910159  291820 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 13:57:21.910170  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1121 13:57:21.910228  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
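	Note: the recurring "docker container inspect -f ..." calls here and below resolve which host port Docker mapped to the container's SSH port (22/tcp), using a Go template over the container's network settings. The same lookup, runnable directly (profile name from this log):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-494116

	The sshutil lines further down show the resolved value (Port:33138) being handed to the SSH clients.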
	I1121 13:57:21.817434  291820 addons.go:70] Setting volumesnapshots=true in profile "addons-494116"
	I1121 13:57:21.921367  291820 addons.go:239] Setting addon volumesnapshots=true in "addons-494116"
	I1121 13:57:21.921442  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.922052  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.817608  291820 out.go:179] * Verifying Kubernetes components...
	I1121 13:57:21.819339  291820 addons.go:70] Setting default-storageclass=true in profile "addons-494116"
	I1121 13:57:21.930394  291820 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-494116"
	I1121 13:57:21.930714  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.819353  291820 addons.go:70] Setting gcp-auth=true in profile "addons-494116"
	I1121 13:57:21.940538  291820 mustload.go:66] Loading cluster: addons-494116
	I1121 13:57:21.940742  291820 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:57:21.941004  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.950647  291820 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 13:57:21.950669  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 13:57:21.950726  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:21.819361  291820 addons.go:70] Setting ingress=true in profile "addons-494116"
	I1121 13:57:21.962533  291820 addons.go:239] Setting addon ingress=true in "addons-494116"
	I1121 13:57:21.962675  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.963485  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.819370  291820 addons.go:70] Setting ingress-dns=true in profile "addons-494116"
	I1121 13:57:21.990305  291820 addons.go:239] Setting addon ingress-dns=true in "addons-494116"
	I1121 13:57:21.990365  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.990880  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:22.005341  291820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:57:22.010189  291820 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 13:57:22.015395  291820 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 13:57:22.015422  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 13:57:22.015506  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.066714  291820 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 13:57:22.076584  291820 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 13:57:22.076654  291820 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 13:57:22.076781  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.077089  291820 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 13:57:22.086242  291820 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 13:57:22.086335  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 13:57:22.086439  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.120165  291820 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 13:57:22.123271  291820 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 13:57:22.123378  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 13:57:22.123492  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.140160  291820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 13:57:22.163695  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 13:57:22.169145  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 13:57:22.172204  291820 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 13:57:22.174997  291820 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 13:57:22.175018  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 13:57:22.175079  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	W1121 13:57:22.194949  291820 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
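	Note: this warning is expected on this job: volcano=true appears in the requested addon map at 13:57:21, but the addon's enable callback rejects the crio runtime, so the error is reported and the remaining addons continue. For crio profiles the addon can be switched off explicitly with:

	minikube -p addons-494116 addons disable volcano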
	I1121 13:57:22.196695  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 13:57:22.197791  291820 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 13:57:22.199793  291820 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 13:57:22.203087  291820 addons.go:239] Setting addon default-storageclass=true in "addons-494116"
	I1121 13:57:22.203145  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:22.203605  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:22.208652  291820 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 13:57:22.208673  291820 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 13:57:22.208737  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.216516  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 13:57:22.218989  291820 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:57:22.219238  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:22.246949  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 13:57:22.247001  291820 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 13:57:22.247062  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.236102  291820 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-494116"
	I1121 13:57:22.247558  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:22.251073  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:22.264631  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.265423  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.271881  291820 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 13:57:22.271963  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 13:57:22.272020  291820 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 13:57:22.272029  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 13:57:22.272094  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.279742  291820 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:57:22.280098  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 13:57:22.280793  291820 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 13:57:22.280873  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.280156  291820 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 13:57:22.282466  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 13:57:22.282539  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.296735  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 13:57:22.299762  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 13:57:22.305881  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 13:57:22.308672  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 13:57:22.308699  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 13:57:22.308784  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.330375  291820 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 13:57:22.334325  291820 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 13:57:22.334351  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 13:57:22.334416  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.362823  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.389701  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.392615  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.402936  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.409489  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.420477  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.441072  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.448763  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	W1121 13:57:22.450515  291820 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:57:22.450545  291820 retry.go:31] will retry after 339.163073ms: ssh: handshake failed: EOF
	I1121 13:57:22.464559  291820 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 13:57:22.464580  291820 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 13:57:22.464650  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.491115  291820 out.go:179]   - Using image docker.io/busybox:stable
	I1121 13:57:22.494142  291820 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 13:57:22.498245  291820 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 13:57:22.498268  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 13:57:22.498339  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.520624  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	W1121 13:57:22.525022  291820 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:57:22.525032  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.525049  291820 retry.go:31] will retry after 138.757203ms: ssh: handshake failed: EOF
	I1121 13:57:22.556125  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.560313  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	W1121 13:57:22.561524  291820 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:57:22.561548  291820 retry.go:31] will retry after 194.866906ms: ssh: handshake failed: EOF
	I1121 13:57:22.585633  291820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1121 13:57:22.665051  291820 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:57:22.665133  291820 retry.go:31] will retry after 345.097013ms: ssh: handshake failed: EOF
	I1121 13:57:22.948600  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 13:57:23.011498  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 13:57:23.013939  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 13:57:23.017493  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 13:57:23.017827  291820 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 13:57:23.017864  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 13:57:23.020426  291820 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 13:57:23.020491  291820 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 13:57:23.034695  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 13:57:23.035647  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 13:57:23.037833  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 13:57:23.116473  291820 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 13:57:23.116496  291820 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 13:57:23.139429  291820 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 13:57:23.139460  291820 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 13:57:23.147599  291820 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 13:57:23.147621  291820 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 13:57:23.176861  291820 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 13:57:23.176935  291820 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 13:57:23.178486  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 13:57:23.210735  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 13:57:23.226133  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 13:57:23.226210  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 13:57:23.268222  291820 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 13:57:23.268294  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 13:57:23.269033  291820 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 13:57:23.269082  291820 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 13:57:23.293729  291820 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.153530463s)
	I1121 13:57:23.293756  291820 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
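	Note: the replace that just completed (the 1.15s sed pipeline) injects a hosts stanza into CoreDNS's Corefile so pods can resolve host.minikube.internal to the host-side gateway. Reconstructed from the sed expressions above, the injected block is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}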
	I1121 13:57:23.295258  291820 node_ready.go:35] waiting up to 6m0s for node "addons-494116" to be "Ready" ...
	I1121 13:57:23.374818  291820 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 13:57:23.374892  291820 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 13:57:23.406411  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 13:57:23.406488  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 13:57:23.436974  291820 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 13:57:23.437051  291820 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 13:57:23.461906  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 13:57:23.480097  291820 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 13:57:23.480171  291820 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 13:57:23.564258  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 13:57:23.567024  291820 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 13:57:23.567091  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 13:57:23.603483  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 13:57:23.633480  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 13:57:23.633556  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 13:57:23.652353  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 13:57:23.652440  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 13:57:23.689552  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 13:57:23.689632  291820 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 13:57:23.722836  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 13:57:23.722913  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 13:57:23.761398  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 13:57:23.797069  291820 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-494116" context rescaled to 1 replicas
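The kapi.go:214 line records the coredns deployment being pinned to a single replica for this one-node cluster. A hedged sketch of that operation through the scale subresource follows; rescaleDeployment is a hypothetical name, and minikube's own helper may go about it differently.

    // rescaleDeployment sets a deployment's replica count via the scale
    // subresource, matching the "rescaled to 1 replicas" step logged
    // above. Hypothetical sketch of the operation.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func rescaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
        scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
        return err
    }

For the step logged here the call would be, e.g., rescaleDeployment(ctx, cs, "kube-system", "coredns", 1).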
	I1121 13:57:23.915353  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 13:57:23.915376  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 13:57:23.917926  291820 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:57:23.917945  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 13:57:24.155174  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 13:57:24.155200  291820 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 13:57:24.167522  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:57:24.350452  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 13:57:24.350476  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 13:57:24.596616  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 13:57:24.596641  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 13:57:24.858252  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 13:57:24.858282  291820 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 13:57:24.989612  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.040920593s)
	I1121 13:57:24.989704  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.978130835s)
	I1121 13:57:24.989746  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.975736924s)
	I1121 13:57:25.153081  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1121 13:57:25.326756  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:26.821777  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.804201922s)
	I1121 13:57:26.821855  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.78709922s)
	I1121 13:57:27.725894  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.690175519s)
	I1121 13:57:27.725926  291820 addons.go:495] Verifying addon ingress=true in "addons-494116"
	I1121 13:57:27.726148  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.688250576s)
	I1121 13:57:27.726220  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.547664303s)
	I1121 13:57:27.726287  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.515492076s)
	I1121 13:57:27.726507  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.264527904s)
	I1121 13:57:27.726527  291820 addons.go:495] Verifying addon registry=true in "addons-494116"
	I1121 13:57:27.726968  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.162642593s)
	I1121 13:57:27.726988  291820 addons.go:495] Verifying addon metrics-server=true in "addons-494116"
	I1121 13:57:27.727031  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.123479736s)
	I1121 13:57:27.727071  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.965601344s)
	I1121 13:57:27.727308  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.559754622s)
	W1121 13:57:27.728074  291820 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 13:57:27.728095  291820 retry.go:31] will retry after 182.892821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
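This failure is the usual CRD-ordering race: the VolumeSnapshotClass object is submitted in the same kubectl apply as the CRDs that define it, and the API server has not yet finished registering the new kind, hence "no matches for kind ... ensure CRDs are installed first". minikube handles it by retrying, first after ~183ms and then with apply --force at 13:57:27 below. An alternative is to wait for the CRD's Established condition before applying its instances; the following is a hedged sketch of that wait (a hypothetical helper, not what minikube does):

    // waitForCRDEstablished blocks until the named CRD reports
    // Established=True, avoiding the "no matches for kind" race seen
    // above. Hypothetical helper; minikube itself just retries the apply.
    package main

    import (
        "context"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    func waitForCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 200*time.Millisecond, time.Minute, true,
            func(ctx context.Context) (bool, error) {
                crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // CRD not visible yet; keep polling
                }
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

For the failure above one would wait on, e.g., "volumesnapshotclasses.snapshot.storage.k8s.io" before applying csi-hostpath-snapshotclass.yaml.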
	I1121 13:57:27.728916  291820 out.go:179] * Verifying ingress addon...
	I1121 13:57:27.730898  291820 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-494116 service yakd-dashboard -n yakd-dashboard
	
	I1121 13:57:27.730897  291820 out.go:179] * Verifying registry addon...
	I1121 13:57:27.734695  291820 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 13:57:27.734695  291820 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1121 13:57:27.744656  291820 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 13:57:27.744676  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:27.745007  291820 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 13:57:27.745022  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
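Each of the repeated "waiting for pod ... current state: Pending" lines that follow is one iteration of a poll over the pods matched by the label selector; the wait ends once every matched pod leaves Pending. A rough client-go equivalent is sketched below (waitForPodsRunning is a hypothetical helper; minikube's kapi.go loop differs in its intervals and exact checks):

    // waitForPodsRunning polls the pods matching a label selector until
    // every one reports phase Running, approximating the kapi.go:96 wait
    // loop that produces the "current state: Pending" lines in this log.
    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient error, or nothing scheduled yet
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil // e.g. still Pending, as logged here
                    }
                }
                return true, nil
            })
    }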
	W1121 13:57:27.805358  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:27.911653  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:57:28.099584  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.946452826s)
	I1121 13:57:28.099666  291820 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-494116"
	I1121 13:57:28.102619  291820 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 13:57:28.106371  291820 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 13:57:28.119007  291820 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 13:57:28.119030  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:28.239193  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:28.239326  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:28.609846  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:28.739413  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:28.739862  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.109700  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:29.238894  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.239005  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:29.609932  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:29.738655  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:29.738788  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.848009  291820 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 13:57:29.848098  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:29.865625  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
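The docker container inspect template above reads .NetworkSettings.Ports["22/tcp"][0].HostPort, i.e. the host port Docker published for the node container's SSH port, after which sshutil dials 127.0.0.1:33138 with the profile's id_rsa key as user docker. A hedged sketch of such a dial with golang.org/x/crypto/ssh (dialNodeSSH is a hypothetical name, not minikube's sshutil):

    // dialNodeSSH opens an SSH client to a minikube docker-driver node
    // given the published host port extracted by the docker inspect
    // template above. Hypothetical sketch.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func dialNodeSSH(port int, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User: "docker", // the Username logged above
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // The node's host key is ephemeral, so checking is skipped in
            // this sketch; a hardened client should pin it instead.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", port), cfg)
    }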
	I1121 13:57:29.979353  291820 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 13:57:30.004198  291820 addons.go:239] Setting addon gcp-auth=true in "addons-494116"
	I1121 13:57:30.004252  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:30.004761  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:30.038251  291820 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 13:57:30.038309  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:30.089299  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:30.110688  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:30.237957  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:30.238292  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:30.299042  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:30.610573  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:30.708661  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.796910559s)
	I1121 13:57:30.711981  291820 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:57:30.715030  291820 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 13:57:30.717780  291820 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 13:57:30.717803  291820 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 13:57:30.733288  291820 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 13:57:30.733311  291820 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 13:57:30.740114  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:30.740317  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:30.747990  291820 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 13:57:30.748014  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 13:57:30.761129  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 13:57:31.112561  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:31.241036  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:31.253566  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:31.274461  291820 addons.go:495] Verifying addon gcp-auth=true in "addons-494116"
	I1121 13:57:31.277402  291820 out.go:179] * Verifying gcp-auth addon...
	I1121 13:57:31.281131  291820 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 13:57:31.290354  291820 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 13:57:31.290381  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:31.610210  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:31.738643  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:31.738793  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:31.784366  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:32.109314  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:32.239648  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:32.240032  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:32.284878  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:32.609864  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:32.738675  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:32.738881  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:32.785084  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:32.803248  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:33.109999  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:33.238179  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:33.238588  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:33.284155  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:33.609770  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:33.737808  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:33.738176  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:33.785055  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:34.110398  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:34.239223  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:34.239349  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:34.285372  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:34.610054  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:34.738416  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:34.738770  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:34.784700  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:35.109990  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:35.238329  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:35.238626  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:35.284514  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:35.298208  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:35.610459  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:35.738795  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:35.739239  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:35.784256  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:36.110057  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:36.238245  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:36.238352  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:36.284275  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:36.609565  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:36.737918  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:36.738160  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:36.783939  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:37.110548  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:37.238528  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:37.238941  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:37.284750  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:37.298246  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:37.609576  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:37.738903  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:37.738947  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:37.784515  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:38.109667  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:38.238907  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:38.239250  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:38.283964  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:38.609979  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:38.738211  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:38.738376  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:38.784027  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:39.109741  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:39.237618  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:39.238029  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:39.285123  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:39.298938  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:39.609914  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:39.737946  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:39.738258  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:39.784088  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:40.111249  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:40.238825  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:40.238972  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:40.284836  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:40.609765  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:40.737963  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:40.738009  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:40.784638  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:41.110518  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:41.238649  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:41.238736  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:41.284974  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:41.609756  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:41.738757  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:41.738892  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:41.785025  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:41.802563  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:42.110151  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:42.239168  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:42.239531  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:42.284444  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:42.609679  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:42.738726  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:42.738970  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:42.786111  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:43.110713  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:43.238712  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:43.238865  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:43.284654  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:43.609962  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:43.738745  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:43.738901  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:43.785082  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:44.109430  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:44.241827  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:44.242047  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:44.285017  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:44.298820  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:44.609977  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:44.737987  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:44.738963  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:44.784655  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:45.111792  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:45.240336  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:45.243327  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:45.285536  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:45.609672  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:45.739206  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:45.739650  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:45.784512  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:46.109444  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:46.238433  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:46.238566  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:46.284489  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:46.610551  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:46.739432  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:46.739543  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:46.784219  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:46.801905  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:47.110641  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:47.237838  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:47.238710  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:47.284632  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:47.609260  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:47.738639  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:47.738744  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:47.787795  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:48.109601  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:48.238100  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:48.238305  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:48.284276  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:48.610226  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:48.739556  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:48.739949  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:48.784767  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:49.110553  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:49.238068  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:49.238102  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:49.284621  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:49.298387  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:49.609440  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:49.739187  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:49.740347  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:49.784106  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:50.109582  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:50.239157  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:50.239306  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:50.284209  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:50.609891  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:50.738000  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:50.740407  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:50.784575  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:51.110673  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:51.238007  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:51.238208  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:51.285028  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:51.298919  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:51.610217  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:51.739712  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:51.739762  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:51.784903  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:52.110076  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:52.238417  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:52.238588  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:52.284219  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:52.610748  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:52.738154  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:52.738417  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:52.784101  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:53.110443  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:53.238554  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:53.238689  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:53.284505  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:53.609566  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:53.739368  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:53.740159  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:53.784231  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:53.801528  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:54.110325  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:54.238645  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:54.238830  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:54.284657  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:54.610329  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:54.738706  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:54.739820  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:54.785075  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:55.109972  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:55.238413  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:55.238628  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:55.284234  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:55.610133  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:55.738526  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:55.740818  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:55.784682  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:56.110077  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:56.238205  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:56.238495  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:56.284432  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:56.298265  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:56.609532  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:56.738095  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:56.739268  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:56.783987  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:57.110330  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:57.238754  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:57.239055  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:57.284838  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:57.609504  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:57.739743  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:57.739819  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:57.784606  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:58.109648  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:58.238702  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:58.238963  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:58.284578  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:58.298556  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:58.609807  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:58.738999  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:58.739134  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:58.784007  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:59.109757  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:59.238170  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:59.238379  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:59.284583  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:59.609479  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:59.739736  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:59.739824  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:59.784585  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:00.111037  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:00.285201  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:00.285419  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:00.297334  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:58:00.315547  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:58:00.610046  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:00.738235  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:00.738664  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:00.784438  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:01.109171  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:01.238437  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:01.238685  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:01.284876  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:01.610613  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:01.738705  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:01.738895  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:01.786621  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:02.148946  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:02.262634  291820 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 13:58:02.262663  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:02.262776  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:02.308831  291820 node_ready.go:49] node "addons-494116" is "Ready"
	I1121 13:58:02.308864  291820 node_ready.go:38] duration metric: took 39.01357496s for node "addons-494116" to be "Ready" ...
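
The node_ready.go transition above (Ready flipping from "False" to true after ~39s) boils down to reading the node's NodeReady condition. A standalone sketch, again assuming client-go and the default kubeconfig; the node name is taken from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-494116", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A node is "Ready" when its NodeReady condition reports status True.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %q has \"Ready\":%q status\n", node.Name, c.Status)
		}
	}
}
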
	I1121 13:58:02.308878  291820 api_server.go:52] waiting for apiserver process to appear ...
	I1121 13:58:02.308937  291820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 13:58:02.318544  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:02.332431  291820 api_server.go:72] duration metric: took 40.539432432s to wait for apiserver process to appear ...
	I1121 13:58:02.332461  291820 api_server.go:88] waiting for apiserver healthz status ...
	I1121 13:58:02.332483  291820 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1121 13:58:02.344046  291820 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1121 13:58:02.345666  291820 api_server.go:141] control plane version: v1.34.1
	I1121 13:58:02.345699  291820 api_server.go:131] duration metric: took 13.230612ms to wait for apiserver health ...
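
The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A sketch of that check, with certificate verification disabled purely for illustration (minikube builds its transport from the cluster CA instead); the address is the one logged above:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: a real client should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
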
	I1121 13:58:02.345709  291820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 13:58:02.354169  291820 system_pods.go:59] 19 kube-system pods found
	I1121 13:58:02.354220  291820 system_pods.go:61] "coredns-66bc5c9577-frfnw" [6ef6e1fd-b7ba-4c77-ad3d-5bc589360cc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:58:02.354231  291820 system_pods.go:61] "csi-hostpath-attacher-0" [e0486779-420d-4eb9-bccd-3cfd26e61825] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:58:02.354242  291820 system_pods.go:61] "csi-hostpath-resizer-0" [ac63a392-f7c4-44db-a7fe-586b0d4bc265] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:58:02.354247  291820 system_pods.go:61] "csi-hostpathplugin-l2g77" [15e321e1-1e6a-4260-b28e-0d9f8af1f143] Pending
	I1121 13:58:02.354260  291820 system_pods.go:61] "etcd-addons-494116" [075ec525-a3c6-4137-aefb-3379eb8ef3c1] Running
	I1121 13:58:02.354274  291820 system_pods.go:61] "kindnet-5wkpj" [dd9b231b-1e87-4f12-a860-c02bf7976209] Running
	I1121 13:58:02.354280  291820 system_pods.go:61] "kube-apiserver-addons-494116" [5d485f91-710c-496c-b37f-7f6929814de6] Running
	I1121 13:58:02.354285  291820 system_pods.go:61] "kube-controller-manager-addons-494116" [2c1bf1fb-98e2-406c-b84d-43a747873724] Running
	I1121 13:58:02.354294  291820 system_pods.go:61] "kube-ingress-dns-minikube" [f0a8f2eb-75c6-478e-9434-463008f212b6] Pending
	I1121 13:58:02.354299  291820 system_pods.go:61] "kube-proxy-cnpzl" [bbd71e9d-2f4a-493e-80d9-47059ebffa52] Running
	I1121 13:58:02.354303  291820 system_pods.go:61] "kube-scheduler-addons-494116" [db9f8e07-36fe-4c14-9d9a-f6009b0d60d0] Running
	I1121 13:58:02.354308  291820 system_pods.go:61] "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Pending
	I1121 13:58:02.354314  291820 system_pods.go:61] "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Pending
	I1121 13:58:02.354319  291820 system_pods.go:61] "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Pending
	I1121 13:58:02.354327  291820 system_pods.go:61] "registry-creds-764b6fb674-sl95w" [b6f343b9-9d5d-4236-a7b8-a958f297db46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:58:02.354339  291820 system_pods.go:61] "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Pending
	I1121 13:58:02.354352  291820 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jrcpn" [e4aa5f65-e65c-4b8f-b775-80cf9ef5f801] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.354362  291820 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vckmf" [53b85d18-2d86-4032-8f17-7be89eaa9beb] Pending
	I1121 13:58:02.354368  291820 system_pods.go:61] "storage-provisioner" [d930299d-8e9d-4e9a-907a-15d7167e4f56] Pending
	I1121 13:58:02.354378  291820 system_pods.go:74] duration metric: took 8.663643ms to wait for pod list to return data ...
	I1121 13:58:02.354386  291820 default_sa.go:34] waiting for default service account to be created ...
	I1121 13:58:02.378988  291820 default_sa.go:45] found service account: "default"
	I1121 13:58:02.379026  291820 default_sa.go:55] duration metric: took 24.618259ms for default service account to be created ...
	I1121 13:58:02.379038  291820 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 13:58:02.408073  291820 system_pods.go:86] 19 kube-system pods found
	I1121 13:58:02.408117  291820 system_pods.go:89] "coredns-66bc5c9577-frfnw" [6ef6e1fd-b7ba-4c77-ad3d-5bc589360cc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:58:02.408127  291820 system_pods.go:89] "csi-hostpath-attacher-0" [e0486779-420d-4eb9-bccd-3cfd26e61825] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:58:02.408135  291820 system_pods.go:89] "csi-hostpath-resizer-0" [ac63a392-f7c4-44db-a7fe-586b0d4bc265] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:58:02.408139  291820 system_pods.go:89] "csi-hostpathplugin-l2g77" [15e321e1-1e6a-4260-b28e-0d9f8af1f143] Pending
	I1121 13:58:02.408143  291820 system_pods.go:89] "etcd-addons-494116" [075ec525-a3c6-4137-aefb-3379eb8ef3c1] Running
	I1121 13:58:02.408148  291820 system_pods.go:89] "kindnet-5wkpj" [dd9b231b-1e87-4f12-a860-c02bf7976209] Running
	I1121 13:58:02.408152  291820 system_pods.go:89] "kube-apiserver-addons-494116" [5d485f91-710c-496c-b37f-7f6929814de6] Running
	I1121 13:58:02.408156  291820 system_pods.go:89] "kube-controller-manager-addons-494116" [2c1bf1fb-98e2-406c-b84d-43a747873724] Running
	I1121 13:58:02.408161  291820 system_pods.go:89] "kube-ingress-dns-minikube" [f0a8f2eb-75c6-478e-9434-463008f212b6] Pending
	I1121 13:58:02.408166  291820 system_pods.go:89] "kube-proxy-cnpzl" [bbd71e9d-2f4a-493e-80d9-47059ebffa52] Running
	I1121 13:58:02.408171  291820 system_pods.go:89] "kube-scheduler-addons-494116" [db9f8e07-36fe-4c14-9d9a-f6009b0d60d0] Running
	I1121 13:58:02.408182  291820 system_pods.go:89] "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Pending
	I1121 13:58:02.408186  291820 system_pods.go:89] "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Pending
	I1121 13:58:02.408194  291820 system_pods.go:89] "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Pending
	I1121 13:58:02.408201  291820 system_pods.go:89] "registry-creds-764b6fb674-sl95w" [b6f343b9-9d5d-4236-a7b8-a958f297db46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:58:02.408211  291820 system_pods.go:89] "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Pending
	I1121 13:58:02.408218  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jrcpn" [e4aa5f65-e65c-4b8f-b775-80cf9ef5f801] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.408231  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vckmf" [53b85d18-2d86-4032-8f17-7be89eaa9beb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.408235  291820 system_pods.go:89] "storage-provisioner" [d930299d-8e9d-4e9a-907a-15d7167e4f56] Pending
	I1121 13:58:02.408250  291820 retry.go:31] will retry after 265.505455ms: missing components: kube-dns
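
The retry.go:31 lines drive the k8s-apps wait: each failed check ("missing components: kube-dns") is retried after a jittered, growing delay, which is why the logged intervals (265ms, 291ms, 304ms) creep upward. A generic sketch of that pattern; the initial delay, cap, jitter, and attempt count here are illustrative, not minikube's actual retry parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries check with exponentially growing, jittered delays.
func retryExpo(check func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		err := check()
		if err == nil {
			return nil
		}
		// Jitter so concurrent waiters do not poll in lockstep.
		d := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return errors.New("timed out waiting for check to pass")
}

func main() {
	missing := 3 // stand-in for a component that becomes ready after a few polls
	if err := retryExpo(func() error {
		missing--
		if missing > 0 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}, 250*time.Millisecond, 2*time.Second, 10); err != nil {
		fmt.Println(err)
	}
}
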
	I1121 13:58:02.620035  291820 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 13:58:02.620070  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:02.682447  291820 system_pods.go:86] 19 kube-system pods found
	I1121 13:58:02.682485  291820 system_pods.go:89] "coredns-66bc5c9577-frfnw" [6ef6e1fd-b7ba-4c77-ad3d-5bc589360cc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:58:02.682495  291820 system_pods.go:89] "csi-hostpath-attacher-0" [e0486779-420d-4eb9-bccd-3cfd26e61825] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:58:02.682502  291820 system_pods.go:89] "csi-hostpath-resizer-0" [ac63a392-f7c4-44db-a7fe-586b0d4bc265] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:58:02.682518  291820 system_pods.go:89] "csi-hostpathplugin-l2g77" [15e321e1-1e6a-4260-b28e-0d9f8af1f143] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:58:02.682524  291820 system_pods.go:89] "etcd-addons-494116" [075ec525-a3c6-4137-aefb-3379eb8ef3c1] Running
	I1121 13:58:02.682529  291820 system_pods.go:89] "kindnet-5wkpj" [dd9b231b-1e87-4f12-a860-c02bf7976209] Running
	I1121 13:58:02.682534  291820 system_pods.go:89] "kube-apiserver-addons-494116" [5d485f91-710c-496c-b37f-7f6929814de6] Running
	I1121 13:58:02.682545  291820 system_pods.go:89] "kube-controller-manager-addons-494116" [2c1bf1fb-98e2-406c-b84d-43a747873724] Running
	I1121 13:58:02.682551  291820 system_pods.go:89] "kube-ingress-dns-minikube" [f0a8f2eb-75c6-478e-9434-463008f212b6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:58:02.682563  291820 system_pods.go:89] "kube-proxy-cnpzl" [bbd71e9d-2f4a-493e-80d9-47059ebffa52] Running
	I1121 13:58:02.682568  291820 system_pods.go:89] "kube-scheduler-addons-494116" [db9f8e07-36fe-4c14-9d9a-f6009b0d60d0] Running
	I1121 13:58:02.682572  291820 system_pods.go:89] "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Pending
	I1121 13:58:02.682576  291820 system_pods.go:89] "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Pending
	I1121 13:58:02.682591  291820 system_pods.go:89] "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:58:02.682602  291820 system_pods.go:89] "registry-creds-764b6fb674-sl95w" [b6f343b9-9d5d-4236-a7b8-a958f297db46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:58:02.682608  291820 system_pods.go:89] "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:58:02.682617  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jrcpn" [e4aa5f65-e65c-4b8f-b775-80cf9ef5f801] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.682624  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vckmf" [53b85d18-2d86-4032-8f17-7be89eaa9beb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.682630  291820 system_pods.go:89] "storage-provisioner" [d930299d-8e9d-4e9a-907a-15d7167e4f56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:58:02.682650  291820 retry.go:31] will retry after 291.611485ms: missing components: kube-dns
	I1121 13:58:02.745621  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:02.746018  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:02.785421  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:02.980137  291820 system_pods.go:86] 19 kube-system pods found
	I1121 13:58:02.980175  291820 system_pods.go:89] "coredns-66bc5c9577-frfnw" [6ef6e1fd-b7ba-4c77-ad3d-5bc589360cc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:58:02.980193  291820 system_pods.go:89] "csi-hostpath-attacher-0" [e0486779-420d-4eb9-bccd-3cfd26e61825] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:58:02.980203  291820 system_pods.go:89] "csi-hostpath-resizer-0" [ac63a392-f7c4-44db-a7fe-586b0d4bc265] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:58:02.980215  291820 system_pods.go:89] "csi-hostpathplugin-l2g77" [15e321e1-1e6a-4260-b28e-0d9f8af1f143] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:58:02.980224  291820 system_pods.go:89] "etcd-addons-494116" [075ec525-a3c6-4137-aefb-3379eb8ef3c1] Running
	I1121 13:58:02.980230  291820 system_pods.go:89] "kindnet-5wkpj" [dd9b231b-1e87-4f12-a860-c02bf7976209] Running
	I1121 13:58:02.980241  291820 system_pods.go:89] "kube-apiserver-addons-494116" [5d485f91-710c-496c-b37f-7f6929814de6] Running
	I1121 13:58:02.980246  291820 system_pods.go:89] "kube-controller-manager-addons-494116" [2c1bf1fb-98e2-406c-b84d-43a747873724] Running
	I1121 13:58:02.980253  291820 system_pods.go:89] "kube-ingress-dns-minikube" [f0a8f2eb-75c6-478e-9434-463008f212b6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:58:02.980269  291820 system_pods.go:89] "kube-proxy-cnpzl" [bbd71e9d-2f4a-493e-80d9-47059ebffa52] Running
	I1121 13:58:02.980275  291820 system_pods.go:89] "kube-scheduler-addons-494116" [db9f8e07-36fe-4c14-9d9a-f6009b0d60d0] Running
	I1121 13:58:02.980290  291820 system_pods.go:89] "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:58:02.980298  291820 system_pods.go:89] "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:58:02.980304  291820 system_pods.go:89] "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:58:02.980314  291820 system_pods.go:89] "registry-creds-764b6fb674-sl95w" [b6f343b9-9d5d-4236-a7b8-a958f297db46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:58:02.980322  291820 system_pods.go:89] "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:58:02.980333  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jrcpn" [e4aa5f65-e65c-4b8f-b775-80cf9ef5f801] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.980352  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vckmf" [53b85d18-2d86-4032-8f17-7be89eaa9beb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.980358  291820 system_pods.go:89] "storage-provisioner" [d930299d-8e9d-4e9a-907a-15d7167e4f56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:58:02.980375  291820 retry.go:31] will retry after 304.560831ms: missing components: kube-dns
	I1121 13:58:03.111625  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:03.256781  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:03.258481  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:03.332042  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:03.332977  291820 system_pods.go:86] 19 kube-system pods found
	I1121 13:58:03.333009  291820 system_pods.go:89] "coredns-66bc5c9577-frfnw" [6ef6e1fd-b7ba-4c77-ad3d-5bc589360cc4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:58:03.333051  291820 system_pods.go:89] "csi-hostpath-attacher-0" [e0486779-420d-4eb9-bccd-3cfd26e61825] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:58:03.333071  291820 system_pods.go:89] "csi-hostpath-resizer-0" [ac63a392-f7c4-44db-a7fe-586b0d4bc265] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:58:03.333080  291820 system_pods.go:89] "csi-hostpathplugin-l2g77" [15e321e1-1e6a-4260-b28e-0d9f8af1f143] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:58:03.333090  291820 system_pods.go:89] "etcd-addons-494116" [075ec525-a3c6-4137-aefb-3379eb8ef3c1] Running
	I1121 13:58:03.333094  291820 system_pods.go:89] "kindnet-5wkpj" [dd9b231b-1e87-4f12-a860-c02bf7976209] Running
	I1121 13:58:03.333100  291820 system_pods.go:89] "kube-apiserver-addons-494116" [5d485f91-710c-496c-b37f-7f6929814de6] Running
	I1121 13:58:03.333117  291820 system_pods.go:89] "kube-controller-manager-addons-494116" [2c1bf1fb-98e2-406c-b84d-43a747873724] Running
	I1121 13:58:03.333123  291820 system_pods.go:89] "kube-ingress-dns-minikube" [f0a8f2eb-75c6-478e-9434-463008f212b6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:58:03.333129  291820 system_pods.go:89] "kube-proxy-cnpzl" [bbd71e9d-2f4a-493e-80d9-47059ebffa52] Running
	I1121 13:58:03.333140  291820 system_pods.go:89] "kube-scheduler-addons-494116" [db9f8e07-36fe-4c14-9d9a-f6009b0d60d0] Running
	I1121 13:58:03.333148  291820 system_pods.go:89] "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:58:03.333160  291820 system_pods.go:89] "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:58:03.333167  291820 system_pods.go:89] "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:58:03.333173  291820 system_pods.go:89] "registry-creds-764b6fb674-sl95w" [b6f343b9-9d5d-4236-a7b8-a958f297db46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:58:03.333187  291820 system_pods.go:89] "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:58:03.333198  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jrcpn" [e4aa5f65-e65c-4b8f-b775-80cf9ef5f801] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:03.333205  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vckmf" [53b85d18-2d86-4032-8f17-7be89eaa9beb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:03.333215  291820 system_pods.go:89] "storage-provisioner" [d930299d-8e9d-4e9a-907a-15d7167e4f56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:58:03.333225  291820 system_pods.go:126] duration metric: took 954.180104ms to wait for k8s-apps to be running ...
	I1121 13:58:03.333238  291820 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 13:58:03.333299  291820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 13:58:03.362513  291820 system_svc.go:56] duration metric: took 29.258403ms WaitForService to wait for kubelet
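
The kubelet check above relies only on the exit code of systemctl is-active --quiet; minikube executes it over SSH inside the node container (ssh_runner.go), while this sketch simply shells out locally with the same argument list:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output, so the exit status carries the whole answer.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
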
	I1121 13:58:03.362553  291820 kubeadm.go:587] duration metric: took 41.569558954s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 13:58:03.362571  291820 node_conditions.go:102] verifying NodePressure condition ...
	I1121 13:58:03.381953  291820 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 13:58:03.382000  291820 node_conditions.go:123] node cpu capacity is 2
	I1121 13:58:03.382014  291820 node_conditions.go:105] duration metric: took 19.437575ms to run NodePressure ...
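
The NodePressure step reads the node's capacity fields; the two logged values map directly to the ephemeral-storage and cpu entries of node.Status.Capacity. A sketch reusing the same assumed client-go setup and node name as above:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-494116", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is a map of resource.Quantity values keyed by resource name.
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}
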
	I1121 13:58:03.382028  291820 start.go:242] waiting for startup goroutines ...
	I1121 13:58:03.611054  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:03.739924  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:03.740329  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:03.784941  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:04.111128  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:04.239494  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:04.239634  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:04.285041  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:04.610661  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:04.742257  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:04.742422  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:04.793258  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:05.110200  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:05.240509  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:05.241076  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:05.284886  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:05.610706  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:05.737886  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:05.738086  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:05.784256  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:06.109834  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:06.240291  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:06.240783  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:06.345261  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:06.610693  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:06.738269  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:06.738443  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:06.784550  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:07.110464  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:07.239228  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:07.240482  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:07.340959  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:07.611237  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:07.739827  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:07.740056  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:07.784207  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:08.110575  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:08.239537  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:08.239976  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:08.285137  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:08.610544  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:08.738801  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:08.739081  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:08.784341  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:09.110594  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:09.238851  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:09.238958  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:09.284356  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:09.611148  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:09.739907  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:09.740986  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:09.785245  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:10.111096  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:10.239787  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:10.240128  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:10.285139  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:10.611491  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:10.739784  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:10.740270  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:10.786250  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:11.110348  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:11.239021  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:11.239225  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:11.284183  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:11.611111  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:11.740076  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:11.741238  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:11.784994  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:12.110125  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:12.240143  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:12.240527  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:12.284814  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:12.620324  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:12.741664  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:12.741762  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:12.785092  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:13.112338  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:13.245141  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:13.245647  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:13.286475  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:13.613009  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:13.741321  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:13.741732  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:13.787140  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:14.112843  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:14.243679  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:14.244042  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:14.287435  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:14.613021  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:14.744074  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:14.745064  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:14.787613  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:15.115121  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:15.241801  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:15.242927  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:15.287661  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:15.616120  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:15.741990  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:15.742403  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:15.785076  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:16.111787  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:16.242158  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:16.243490  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:16.288233  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:16.615633  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:16.742311  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:16.742460  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:16.787793  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:17.110432  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:17.240550  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:17.241726  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:17.285153  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:17.613913  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:17.738894  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:17.739820  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:17.784525  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:18.110930  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:18.238849  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:18.239516  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:18.284718  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:18.611450  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:18.739911  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:18.740120  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:18.784115  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:19.110659  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:19.238724  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:19.239486  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:19.284726  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:19.610362  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:19.738433  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:19.739142  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:19.784196  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:20.114268  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:20.240680  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:20.241150  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:20.284178  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:20.610472  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:20.740002  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:20.740286  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:20.784534  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:21.110311  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:21.239928  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:21.240043  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:21.285224  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:21.611837  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:21.739749  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:21.740478  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:21.784781  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:22.110369  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:22.239546  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:22.240649  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:22.284549  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:22.610102  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:22.741896  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:22.742244  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:22.785254  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:23.109802  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:23.239033  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:23.239620  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:23.284617  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:23.610765  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:23.739196  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:23.739404  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:23.784103  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:24.111077  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:24.240046  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:24.240465  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:24.284749  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:24.610170  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:24.739291  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:24.740273  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:24.784419  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:25.110800  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:25.238643  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:25.239340  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:25.284600  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:25.647637  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:25.739438  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:25.739554  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:25.784319  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:26.109963  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:26.239677  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:26.240358  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:26.284615  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:26.610182  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:26.741243  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:26.742380  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:26.784906  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:27.110651  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:27.239445  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:27.239818  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:27.285058  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:27.611461  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:27.740615  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:27.741040  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:27.784757  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:28.110502  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:28.239036  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:28.239149  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:28.285000  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:28.612703  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:28.737808  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:28.739084  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:28.785082  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:29.111195  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:29.240230  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:29.240407  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:29.284224  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:29.619001  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:29.739940  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:29.740078  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:29.783745  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:30.111426  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:30.238842  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:30.239785  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:30.284601  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:30.610597  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:30.739508  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:30.740658  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:30.784689  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:31.110575  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:31.239025  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:31.239203  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:31.284004  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:31.611204  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:31.739871  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:31.740231  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:31.783977  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:32.110642  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:32.238210  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:32.238337  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:32.287109  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:32.610799  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:32.740579  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:32.740888  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:32.783675  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:33.110516  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:33.238932  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:33.239081  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:33.283986  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:33.610726  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:33.738456  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:33.738851  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:33.784620  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:34.110119  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:34.238751  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:34.238850  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:34.285423  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:34.609790  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:34.738709  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:34.738878  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:34.785151  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:35.112591  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:35.242201  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:35.242506  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:35.284606  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:35.647984  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:35.754603  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:35.754866  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:35.796695  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:36.110797  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:36.238232  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:36.239701  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:36.284586  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:36.609476  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:36.738428  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:36.739436  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:36.784582  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:37.110203  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:37.240076  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:37.240473  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:37.284467  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:37.611333  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:37.739408  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:37.739709  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:37.784160  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:38.111390  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:38.239443  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:38.239757  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:38.284333  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:38.612517  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:38.738827  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:38.739071  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:38.784943  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:39.110412  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:39.238744  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:39.239087  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:39.284920  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:39.610918  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:39.739199  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:39.739320  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:39.784430  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:40.110423  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:40.240715  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:40.241327  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:40.283934  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:40.611246  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:40.740281  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:40.740550  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:40.784527  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:41.110421  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:41.239800  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:41.240211  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:41.285219  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:41.610943  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:41.738971  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:41.739115  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:41.784979  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:42.111530  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:42.238671  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:42.238846  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:42.284908  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:42.615591  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:42.739457  291820 kapi.go:107] duration metric: took 1m15.004759298s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 13:58:42.739734  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:42.784663  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:43.110759  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:43.238643  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:43.285727  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:43.611828  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:43.738478  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:43.784564  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:44.113593  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:44.237803  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:44.284746  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:44.611497  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:44.738787  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:44.784904  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:45.117748  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:45.264041  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:45.286223  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:45.611514  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:45.739086  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:45.784219  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:46.110290  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:46.238358  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:46.284195  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:46.611130  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:46.738595  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:46.784147  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:47.111353  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:47.238709  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:47.284765  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:47.610998  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:47.737982  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:47.785244  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:48.110609  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:48.238832  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:48.284942  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:48.612191  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:48.743792  291820 kapi.go:107] duration metric: took 1m21.009096583s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 13:58:48.843552  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:49.110120  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:49.284331  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:49.610563  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:49.784185  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:50.111079  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:50.285601  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:50.615553  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:50.786329  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:51.110265  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:51.284449  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:51.610584  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:51.786902  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:52.110488  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:52.284811  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:52.610315  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:52.784722  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:53.110598  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:53.287674  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:53.610418  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:53.785108  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:54.110054  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:54.284748  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:54.610573  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:54.786498  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:55.113028  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:55.284670  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:55.611706  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:55.785036  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:56.111733  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:56.285632  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:56.611337  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:56.785298  291820 kapi.go:107] duration metric: took 1m25.504165091s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 13:58:56.788536  291820 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-494116 cluster.
	I1121 13:58:56.791486  291820 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 13:58:56.794408  291820 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1121 13:58:57.110009  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:57.610938  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:58.110311  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:58.611090  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:59.109298  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:59.610947  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:00.119986  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:00.610612  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:01.110668  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:01.627584  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:02.110279  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:02.610928  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:03.109919  291820 kapi.go:107] duration metric: took 1m35.003550632s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 13:59:03.112926  291820 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, default-storageclass, inspektor-gadget, registry-creds, metrics-server, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1121 13:59:03.115881  291820 addons.go:530] duration metric: took 1m41.322490207s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner default-storageclass inspektor-gadget registry-creds metrics-server ingress-dns yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1121 13:59:03.115960  291820 start.go:247] waiting for cluster config update ...
	I1121 13:59:03.115984  291820 start.go:256] writing updated cluster config ...
	I1121 13:59:03.116300  291820 ssh_runner.go:195] Run: rm -f paused
	I1121 13:59:03.121007  291820 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 13:59:03.124458  291820 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-frfnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.129443  291820 pod_ready.go:94] pod "coredns-66bc5c9577-frfnw" is "Ready"
	I1121 13:59:03.129530  291820 pod_ready.go:86] duration metric: took 5.039005ms for pod "coredns-66bc5c9577-frfnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.132117  291820 pod_ready.go:83] waiting for pod "etcd-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.137066  291820 pod_ready.go:94] pod "etcd-addons-494116" is "Ready"
	I1121 13:59:03.137098  291820 pod_ready.go:86] duration metric: took 4.951153ms for pod "etcd-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.139274  291820 pod_ready.go:83] waiting for pod "kube-apiserver-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.144201  291820 pod_ready.go:94] pod "kube-apiserver-addons-494116" is "Ready"
	I1121 13:59:03.144230  291820 pod_ready.go:86] duration metric: took 4.925339ms for pod "kube-apiserver-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.146854  291820 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.525033  291820 pod_ready.go:94] pod "kube-controller-manager-addons-494116" is "Ready"
	I1121 13:59:03.525059  291820 pod_ready.go:86] duration metric: took 378.177301ms for pod "kube-controller-manager-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.725519  291820 pod_ready.go:83] waiting for pod "kube-proxy-cnpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:04.125107  291820 pod_ready.go:94] pod "kube-proxy-cnpzl" is "Ready"
	I1121 13:59:04.125137  291820 pod_ready.go:86] duration metric: took 399.590479ms for pod "kube-proxy-cnpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:04.325672  291820 pod_ready.go:83] waiting for pod "kube-scheduler-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:04.725180  291820 pod_ready.go:94] pod "kube-scheduler-addons-494116" is "Ready"
	I1121 13:59:04.725212  291820 pod_ready.go:86] duration metric: took 399.506556ms for pod "kube-scheduler-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:04.725232  291820 pod_ready.go:40] duration metric: took 1.604192565s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 13:59:04.778664  291820 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 13:59:04.781930  291820 out.go:179] * Done! kubectl is now configured to use "addons-494116" cluster and "default" namespace by default
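
Editor's note: the gcp-auth hints printed above describe the webhook's opt-out mechanism — any pod carrying a label with the `gcp-auth-skip-secret` key is left untouched. As a minimal sketch of that (not part of the test run; the pod name, image, and namespace below are illustrative), creating such a pod with client-go looks roughly like:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig minikube writes (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Per the hint above, the gcp-auth webhook skips pods that carry a label
	// with the gcp-auth-skip-secret key, so no GCP credentials are mounted.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds", // hypothetical pod name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "gcr.io/k8s-minikube/busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}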
	
	
	==> CRI-O <==
	Nov 21 14:01:16 addons-494116 crio[832]: time="2025-11-21T14:01:16.217365508Z" level=info msg="Removed pod sandbox: 940d4e75c0114bbb4ca8ce738a1626b8a435f18fcd3d79d465bcbfb1cf2744d5" id=8f2d1a8f-f83f-465f-8efa-0d1d1d0b9b52 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.256937478Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-8w7t5/POD" id=411a8d41-44c7-4ec9-8395-02eeef8fb188 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.257031478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.311667913Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-8w7t5 Namespace:default ID:505a5a1573f3f8f1c310e102a08383c64c1a8e653176689923858e2611505518 UID:c2fbe54c-6b2a-484f-a2f3-b0b216e69ccb NetNS:/var/run/netns/920b6395-c6e4-49ba-938c-6041abe661a9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000e2a038}] Aliases:map[]}"
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.312514211Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-8w7t5 to CNI network \"kindnet\" (type=ptp)"
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.324650151Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-8w7t5 Namespace:default ID:505a5a1573f3f8f1c310e102a08383c64c1a8e653176689923858e2611505518 UID:c2fbe54c-6b2a-484f-a2f3-b0b216e69ccb NetNS:/var/run/netns/920b6395-c6e4-49ba-938c-6041abe661a9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000e2a038}] Aliases:map[]}"
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.324883863Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-8w7t5 for CNI network kindnet (type=ptp)"
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.329087115Z" level=info msg="Ran pod sandbox 505a5a1573f3f8f1c310e102a08383c64c1a8e653176689923858e2611505518 with infra container: default/hello-world-app-5d498dc89-8w7t5/POD" id=411a8d41-44c7-4ec9-8395-02eeef8fb188 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.330492778Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1cd0568b-bc02-40f3-9e53-4f7b52680b8e name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.330664294Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=1cd0568b-bc02-40f3-9e53-4f7b52680b8e name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.330722412Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=1cd0568b-bc02-40f3-9e53-4f7b52680b8e name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.33385212Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=f2ab4a0c-cc59-4f20-aeec-a5ce864826eb name=/runtime.v1.ImageService/PullImage
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.337149595Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.949110192Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=f2ab4a0c-cc59-4f20-aeec-a5ce864826eb name=/runtime.v1.ImageService/PullImage
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.950058432Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5f616efb-858f-41eb-8113-0f46aa157e5e name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.966968329Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=edfa1951-40aa-4b3a-92ab-31d7001a1d60 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.974508162Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-8w7t5/hello-world-app" id=a796280e-ebc5-43c5-a719-d82efb56ba09 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.974775672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.98726518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.98779014Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1301b5dae1a624905f7043767dfb309005360a4c4bd449bc8bce43f47cf1af98/merged/etc/passwd: no such file or directory"
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.987919218Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1301b5dae1a624905f7043767dfb309005360a4c4bd449bc8bce43f47cf1af98/merged/etc/group: no such file or directory"
	Nov 21 14:02:03 addons-494116 crio[832]: time="2025-11-21T14:02:03.988351934Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:02:04 addons-494116 crio[832]: time="2025-11-21T14:02:04.016093042Z" level=info msg="Created container 0f3a885adbc3683bc08d2387833f0c8287392d4afeb14e088ae1e8015f3a5cfe: default/hello-world-app-5d498dc89-8w7t5/hello-world-app" id=a796280e-ebc5-43c5-a719-d82efb56ba09 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:02:04 addons-494116 crio[832]: time="2025-11-21T14:02:04.028110135Z" level=info msg="Starting container: 0f3a885adbc3683bc08d2387833f0c8287392d4afeb14e088ae1e8015f3a5cfe" id=05840ef6-3ce0-4587-bde4-e80f19672b93 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:02:04 addons-494116 crio[832]: time="2025-11-21T14:02:04.033896802Z" level=info msg="Started container" PID=6970 containerID=0f3a885adbc3683bc08d2387833f0c8287392d4afeb14e088ae1e8015f3a5cfe description=default/hello-world-app-5d498dc89-8w7t5/hello-world-app id=05840ef6-3ce0-4587-bde4-e80f19672b93 name=/runtime.v1.RuntimeService/StartContainer sandboxID=505a5a1573f3f8f1c310e102a08383c64c1a8e653176689923858e2611505518
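
Editor's note: the CRI-O entries above trace one pod through the four CRI RPCs named in each line: RunPodSandbox, PullImage, CreateContainer, StartContainer. For orientation, here is a minimal sketch of a Go client driving that same sequence against CRI-O's gRPC socket (the socket path is CRI-O's default, the metadata values are illustrative, and both configs are stripped far below what a real kubelet sends):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// CRI-O's default runtime endpoint; adjust if configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// 1. RunPodSandbox: create the pod sandbox (the infra "POD" container).
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "hello-world-app", Namespace: "default", Uid: "demo-uid",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. PullImage: fetch the container image.
	pulled, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"},
	})
	if err != nil {
		panic(err)
	}

	// 3. CreateContainer: create the app container inside the sandbox.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "hello-world-app"},
			Image:    &runtimeapi.ImageSpec{Image: pulled.ImageRef},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 4. StartContainer: run it.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started container", created.ContainerId, "in sandbox", sb.PodSandboxId)
}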
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	0f3a885adbc36       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   505a5a1573f3f       hello-world-app-5d498dc89-8w7t5            default
	24ce955bfb9c2       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   b50eba61f6978       nginx                                      default
	48c9a7485d851       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   1701d8660feca       busybox                                    default
	e4320f5fe8895       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	22aad46f46903       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	34ffe03bcd4d1       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	c5469a211994e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	274e8dd2dab30       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   3f040642b57a1       gcp-auth-78565c9fb4-c7vmg                  gcp-auth
	305f9dbd1c9ad       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   acf5292852d4b       gadget-mndpk                               gadget
	4298f174eb879       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	ae692940a4fca       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   ddb9d715cce10       ingress-nginx-controller-6c8bf45fb-2z7nm   ingress-nginx
	1e877b39bef08       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   eaa8a939445d4       registry-proxy-mlm5l                       kube-system
	e272b0978ee11       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              patch                                    0                   fe12046627b5a       ingress-nginx-admission-patch-2v528        ingress-nginx
	5d49e8d42c411       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   200db2bc9d56a       nvidia-device-plugin-daemonset-tkkkl       kube-system
	d84d7295e480c       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   e05e7695a99f5       local-path-provisioner-648f6765c9-v9sg7    local-path-storage
	3c55ac84412c8       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   36bcec8d3db3f       snapshot-controller-7d9fbc56b8-vckmf       kube-system
	3e9d7de7df80e       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   4694b44e7f0c1       registry-6b586f9694-cvgwr                  kube-system
	15f09ce47d75a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   53ec72bd9840c       snapshot-controller-7d9fbc56b8-jrcpn       kube-system
	61d5ed18a54c6       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   bf96dc5563166       kube-ingress-dns-minikube                  kube-system
	0ac555261f857       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   643d7ec3c1de2       csi-hostpath-resizer-0                     kube-system
	f601cd1551b26       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	fed54d28f384c       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   afc401bf413ae       yakd-dashboard-5ff678cb9-7w57n             yakd-dashboard
	de59a02962926       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   855cf723d691d       metrics-server-85b7d694d7-5ptdb            kube-system
	c6ab4e55a2c1d       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   9b3c478a9fb15       cloud-spanner-emulator-6f9fcf858b-rkw7z    default
	3c3896dadd82d       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   a80fa09884be1       csi-hostpath-attacher-0                    kube-system
	229383db24a6a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   4 minutes ago            Exited              create                                   0                   57fc47afa660f       ingress-nginx-admission-create-lfq45       ingress-nginx
	a443f1743ed06       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   735e255b76b8a       storage-provisioner                        kube-system
	6fa60b05394e1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   1ea1833af98f9       coredns-66bc5c9577-frfnw                   kube-system
	d401871bd196a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   7e0422275aadd       kindnet-5wkpj                              kube-system
	013fd68042616       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   21f7822690244       kube-proxy-cnpzl                           kube-system
	562af98fdae9f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             4 minutes ago            Running             kube-controller-manager                  0                   81c18f639ff6f       kube-controller-manager-addons-494116      kube-system
	870089e2cb7cf       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             4 minutes ago            Running             kube-scheduler                           0                   0817640a1717b       kube-scheduler-addons-494116               kube-system
	753f8d0dbe26a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             4 minutes ago            Running             kube-apiserver                           0                   80cfb52a4de88       kube-apiserver-addons-494116               kube-system
	1b81e66733803       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             4 minutes ago            Running             etcd                                     0                   4b082c79b9e7a       etcd-addons-494116                         kube-system
	
	
	==> coredns [6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a] <==
	[INFO] 10.244.0.17:55988 - 9123 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.005882194s
	[INFO] 10.244.0.17:55988 - 58930 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000160674s
	[INFO] 10.244.0.17:55988 - 11118 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00009422s
	[INFO] 10.244.0.17:39435 - 33042 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000192764s
	[INFO] 10.244.0.17:39435 - 32854 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095541s
	[INFO] 10.244.0.17:49808 - 52757 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126483s
	[INFO] 10.244.0.17:49808 - 52995 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094031s
	[INFO] 10.244.0.17:57414 - 17751 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000119024s
	[INFO] 10.244.0.17:57414 - 17948 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000227455s
	[INFO] 10.244.0.17:55996 - 57939 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002389127s
	[INFO] 10.244.0.17:55996 - 58367 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004000016s
	[INFO] 10.244.0.17:43890 - 61648 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000199812s
	[INFO] 10.244.0.17:43890 - 61911 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000109031s
	[INFO] 10.244.0.21:48138 - 15433 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000214245s
	[INFO] 10.244.0.21:54092 - 17955 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000186725s
	[INFO] 10.244.0.21:57929 - 20022 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00025647s
	[INFO] 10.244.0.21:45880 - 16769 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000272158s
	[INFO] 10.244.0.21:41860 - 28684 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000468269s
	[INFO] 10.244.0.21:58165 - 56316 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000644796s
	[INFO] 10.244.0.21:58383 - 32930 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002938299s
	[INFO] 10.244.0.21:44814 - 20389 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00341502s
	[INFO] 10.244.0.21:37526 - 61926 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002089752s
	[INFO] 10.244.0.21:50522 - 15444 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001688125s
	[INFO] 10.244.0.23:50183 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000185618s
	[INFO] 10.244.0.23:38974 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141605s
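
Editor's note: the NXDOMAIN/NOERROR pattern above is the pod resolver walking its search path. Because cluster pods run with `options ndots:5`, a short name such as `storage.googleapis.com` (two dots) is first tried with every search suffix, and only the final fully-qualified attempt returns NOERROR. For a pod in the kube-system namespace, the kubelet-written /etc/resolv.conf looks roughly like the sample below (the nameserver address is the conventional kube-dns ClusterIP, an assumption; the first suffix varies with the pod's namespace, e.g. gcp-auth.svc.cluster.local for the 10.244.0.21 queries):

search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
nameserver 10.96.0.10
options ndots:5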
	
	
	==> describe nodes <==
	Name:               addons-494116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-494116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=addons-494116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T13_57_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-494116
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-494116"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 13:57:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-494116
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:02:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:01:20 +0000   Fri, 21 Nov 2025 13:57:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:01:20 +0000   Fri, 21 Nov 2025 13:57:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:01:20 +0000   Fri, 21 Nov 2025 13:57:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:01:20 +0000   Fri, 21 Nov 2025 13:58:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-494116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                c3d2669c-a077-4dc1-a6d1-95f3950011ce
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     cloud-spanner-emulator-6f9fcf858b-rkw7z     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  default                     hello-world-app-5d498dc89-8w7t5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-mndpk                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  gcp-auth                    gcp-auth-78565c9fb4-c7vmg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-2z7nm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m38s
	  kube-system                 coredns-66bc5c9577-frfnw                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m44s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpathplugin-l2g77                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 etcd-addons-494116                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m49s
	  kube-system                 kindnet-5wkpj                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m44s
	  kube-system                 kube-apiserver-addons-494116                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-addons-494116       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-cnpzl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-scheduler-addons-494116                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 metrics-server-85b7d694d7-5ptdb             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m40s
	  kube-system                 nvidia-device-plugin-daemonset-tkkkl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 registry-6b586f9694-cvgwr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 registry-creds-764b6fb674-sl95w             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 registry-proxy-mlm5l                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 snapshot-controller-7d9fbc56b8-jrcpn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 snapshot-controller-7d9fbc56b8-vckmf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  local-path-storage          local-path-provisioner-648f6765c9-v9sg7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-7w57n              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m43s  kube-proxy       
	  Normal   Starting                 4m50s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m50s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m49s  kubelet          Node addons-494116 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m49s  kubelet          Node addons-494116 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m49s  kubelet          Node addons-494116 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m45s  node-controller  Node addons-494116 event: Registered Node addons-494116 in Controller
	  Normal   NodeReady                4m3s   kubelet          Node addons-494116 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015310] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503949] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032916] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.894651] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.192036] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 12:49] hrtimer: interrupt took 26907018 ns
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	[Nov21 13:57] overlayfs: idmapped layers are currently not supported
	[  +0.074753] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7] <==
	{"level":"warn","ts":"2025-11-21T13:57:12.136200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.151377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.166507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.184953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.221901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.231437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.248134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.269092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.304215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.310260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.326801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.368219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.391268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.407731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.424597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.457500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.470477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.507274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.561039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:28.308046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:28.328779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:50.391777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:50.406623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:50.442223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:50.449183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48232","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [274e8dd2dab304ec0d549c501c66439bc79c49a303e2f1f5be056305820a80c8] <==
	2025/11/21 13:58:56 GCP Auth Webhook started!
	2025/11/21 13:59:05 Ready to marshal response ...
	2025/11/21 13:59:05 Ready to write response ...
	2025/11/21 13:59:05 Ready to marshal response ...
	2025/11/21 13:59:05 Ready to write response ...
	2025/11/21 13:59:05 Ready to marshal response ...
	2025/11/21 13:59:05 Ready to write response ...
	2025/11/21 13:59:25 Ready to marshal response ...
	2025/11/21 13:59:25 Ready to write response ...
	2025/11/21 13:59:30 Ready to marshal response ...
	2025/11/21 13:59:30 Ready to write response ...
	2025/11/21 13:59:30 Ready to marshal response ...
	2025/11/21 13:59:30 Ready to write response ...
	2025/11/21 13:59:39 Ready to marshal response ...
	2025/11/21 13:59:39 Ready to write response ...
	2025/11/21 13:59:41 Ready to marshal response ...
	2025/11/21 13:59:41 Ready to write response ...
	2025/11/21 13:59:51 Ready to marshal response ...
	2025/11/21 13:59:51 Ready to write response ...
	2025/11/21 14:00:11 Ready to marshal response ...
	2025/11/21 14:00:11 Ready to write response ...
	2025/11/21 14:02:02 Ready to marshal response ...
	2025/11/21 14:02:02 Ready to write response ...
	
	
	==> kernel <==
	 14:02:05 up  1:44,  0 user,  load average: 0.71, 1.27, 2.11
	Linux addons-494116 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f] <==
	I1121 14:00:01.804673       1 main.go:301] handling current node
	I1121 14:00:11.802169       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:00:11.802302       1 main.go:301] handling current node
	I1121 14:00:21.803696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:00:21.803817       1 main.go:301] handling current node
	I1121 14:00:31.801683       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:00:31.801717       1 main.go:301] handling current node
	I1121 14:00:41.801822       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:00:41.801853       1 main.go:301] handling current node
	I1121 14:00:51.808761       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:00:51.808802       1 main.go:301] handling current node
	I1121 14:01:01.802457       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:01:01.802517       1 main.go:301] handling current node
	I1121 14:01:11.801684       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:01:11.801807       1 main.go:301] handling current node
	I1121 14:01:21.802109       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:01:21.802237       1 main.go:301] handling current node
	I1121 14:01:31.805497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:01:31.805535       1 main.go:301] handling current node
	I1121 14:01:41.802332       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:01:41.802368       1 main.go:301] handling current node
	I1121 14:01:51.802129       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:01:51.802164       1 main.go:301] handling current node
	I1121 14:02:01.804514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:02:01.804546       1 main.go:301] handling current node
	
	
	==> kube-apiserver [753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30] <==
	W1121 13:57:28.307613       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 13:57:28.323028       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1121 13:57:31.135366       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.136.6"}
	W1121 13:57:50.391540       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 13:57:50.406022       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 13:57:50.428288       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1121 13:57:50.447210       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1121 13:58:02.071259       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.136.6:443: connect: connection refused
	E1121 13:58:02.071309       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.136.6:443: connect: connection refused" logger="UnhandledError"
	W1121 13:58:02.071547       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.136.6:443: connect: connection refused
	E1121 13:58:02.071621       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.136.6:443: connect: connection refused" logger="UnhandledError"
	W1121 13:58:02.179090       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.136.6:443: connect: connection refused
	E1121 13:58:02.182647       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.136.6:443: connect: connection refused" logger="UnhandledError"
	E1121 13:58:25.446326       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.206.105:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.206.105:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.206.105:443: connect: connection refused" logger="UnhandledError"
	W1121 13:58:25.447918       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 13:58:25.449958       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1121 13:58:25.491598       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1121 13:59:14.730415       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45208: use of closed network connection
	I1121 13:59:41.568733       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1121 13:59:41.876016       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.95.162"}
	I1121 14:00:01.816288       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1121 14:02:03.150439       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.28.136"}
	
	
	==> kube-controller-manager [562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8] <==
	I1121 13:57:20.400056       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 13:57:20.408227       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-494116" podCIDRs=["10.244.0.0/24"]
	I1121 13:57:20.412700       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 13:57:20.412721       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 13:57:20.412729       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 13:57:20.413772       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 13:57:20.413827       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 13:57:20.414507       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 13:57:20.414664       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 13:57:20.419609       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 13:57:20.419673       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 13:57:20.420859       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 13:57:20.426152       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 13:57:20.429286       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	E1121 13:57:25.968373       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1121 13:57:50.380836       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 13:57:50.384895       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	E1121 13:57:50.433879       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 13:57:50.434056       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 13:57:50.434128       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 13:57:50.485349       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 13:57:50.535201       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 13:58:05.391458       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1121 13:58:20.494473       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1121 13:58:20.539515       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a] <==
	I1121 13:57:21.563156       1 server_linux.go:53] "Using iptables proxy"
	I1121 13:57:21.633773       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 13:57:21.734866       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 13:57:21.734903       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 13:57:21.735005       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 13:57:21.759019       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 13:57:21.759132       1 server_linux.go:132] "Using iptables Proxier"
	I1121 13:57:21.763104       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 13:57:21.763439       1 server.go:527] "Version info" version="v1.34.1"
	I1121 13:57:21.763508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 13:57:21.773217       1 config.go:106] "Starting endpoint slice config controller"
	I1121 13:57:21.773239       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 13:57:21.773552       1 config.go:200] "Starting service config controller"
	I1121 13:57:21.773566       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 13:57:21.773871       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 13:57:21.773885       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 13:57:21.774315       1 config.go:309] "Starting node config controller"
	I1121 13:57:21.774329       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 13:57:21.774336       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 13:57:21.875359       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 13:57:21.875499       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 13:57:21.875768       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef] <==
	E1121 13:57:13.354191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 13:57:13.354233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 13:57:13.354297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 13:57:13.354351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 13:57:13.354400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 13:57:13.358070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 13:57:13.358180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 13:57:13.358507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 13:57:13.358596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 13:57:13.358687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 13:57:13.358786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 13:57:13.358925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 13:57:13.359118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 13:57:13.359293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 13:57:13.359425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 13:57:13.359599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 13:57:14.230266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 13:57:14.263257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 13:57:14.324379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 13:57:14.333946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 13:57:14.415010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 13:57:14.481481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 13:57:14.545592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 13:57:14.549067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1121 13:57:16.435943       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:00:16 addons-494116 kubelet[1273]: I1121 14:00:16.127294    1273 scope.go:117] "RemoveContainer" containerID="e9a3f018ea34d0e476b0c997a91d17d7e854de437b5304c7216c490594d1f401"
	Nov 21 14:00:16 addons-494116 kubelet[1273]: E1121 14:00:16.159441    1273 manager.go:1116] Failed to create existing container: /crio/crio-e19a12a06134d9f9fc8c0a2ca04de776574860336f25c8c05623a27684ff683b: Error finding container e19a12a06134d9f9fc8c0a2ca04de776574860336f25c8c05623a27684ff683b: Status 404 returned error can't find the container with id e19a12a06134d9f9fc8c0a2ca04de776574860336f25c8c05623a27684ff683b
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.039934    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6a2328cc-d14a-4e1f-8914-fe0de1504964-gcp-creds\") pod \"6a2328cc-d14a-4e1f-8914-fe0de1504964\" (UID: \"6a2328cc-d14a-4e1f-8914-fe0de1504964\") "
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.040111    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^650715cc-c6e2-11f0-99a1-1a99f2ecd7ad\") pod \"6a2328cc-d14a-4e1f-8914-fe0de1504964\" (UID: \"6a2328cc-d14a-4e1f-8914-fe0de1504964\") "
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.040162    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b7dg\" (UniqueName: \"kubernetes.io/projected/6a2328cc-d14a-4e1f-8914-fe0de1504964-kube-api-access-9b7dg\") pod \"6a2328cc-d14a-4e1f-8914-fe0de1504964\" (UID: \"6a2328cc-d14a-4e1f-8914-fe0de1504964\") "
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.040582    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a2328cc-d14a-4e1f-8914-fe0de1504964-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "6a2328cc-d14a-4e1f-8914-fe0de1504964" (UID: "6a2328cc-d14a-4e1f-8914-fe0de1504964"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.044632    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2328cc-d14a-4e1f-8914-fe0de1504964-kube-api-access-9b7dg" (OuterVolumeSpecName: "kube-api-access-9b7dg") pod "6a2328cc-d14a-4e1f-8914-fe0de1504964" (UID: "6a2328cc-d14a-4e1f-8914-fe0de1504964"). InnerVolumeSpecName "kube-api-access-9b7dg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.047561    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^650715cc-c6e2-11f0-99a1-1a99f2ecd7ad" (OuterVolumeSpecName: "task-pv-storage") pod "6a2328cc-d14a-4e1f-8914-fe0de1504964" (UID: "6a2328cc-d14a-4e1f-8914-fe0de1504964"). InnerVolumeSpecName "pvc-facc20b4-10c3-4b1d-b08a-c7965d054ca7". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.095181    1273 scope.go:117] "RemoveContainer" containerID="ca091a95754e910c964a7779c4c7253b1bdfe583cfc553b77d6cd3184dba57ba"
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.103737    1273 scope.go:117] "RemoveContainer" containerID="ca091a95754e910c964a7779c4c7253b1bdfe583cfc553b77d6cd3184dba57ba"
	Nov 21 14:00:19 addons-494116 kubelet[1273]: E1121 14:00:19.104342    1273 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca091a95754e910c964a7779c4c7253b1bdfe583cfc553b77d6cd3184dba57ba\": container with ID starting with ca091a95754e910c964a7779c4c7253b1bdfe583cfc553b77d6cd3184dba57ba not found: ID does not exist" containerID="ca091a95754e910c964a7779c4c7253b1bdfe583cfc553b77d6cd3184dba57ba"
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.104533    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca091a95754e910c964a7779c4c7253b1bdfe583cfc553b77d6cd3184dba57ba"} err="failed to get container status \"ca091a95754e910c964a7779c4c7253b1bdfe583cfc553b77d6cd3184dba57ba\": rpc error: code = NotFound desc = could not find container \"ca091a95754e910c964a7779c4c7253b1bdfe583cfc553b77d6cd3184dba57ba\": container with ID starting with ca091a95754e910c964a7779c4c7253b1bdfe583cfc553b77d6cd3184dba57ba not found: ID does not exist"
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.140809    1273 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6a2328cc-d14a-4e1f-8914-fe0de1504964-gcp-creds\") on node \"addons-494116\" DevicePath \"\""
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.140875    1273 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-facc20b4-10c3-4b1d-b08a-c7965d054ca7\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^650715cc-c6e2-11f0-99a1-1a99f2ecd7ad\") on node \"addons-494116\" "
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.140889    1273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9b7dg\" (UniqueName: \"kubernetes.io/projected/6a2328cc-d14a-4e1f-8914-fe0de1504964-kube-api-access-9b7dg\") on node \"addons-494116\" DevicePath \"\""
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.146972    1273 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-facc20b4-10c3-4b1d-b08a-c7965d054ca7" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^650715cc-c6e2-11f0-99a1-1a99f2ecd7ad") on node "addons-494116"
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.241805    1273 reconciler_common.go:299] "Volume detached for volume \"pvc-facc20b4-10c3-4b1d-b08a-c7965d054ca7\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^650715cc-c6e2-11f0-99a1-1a99f2ecd7ad\") on node \"addons-494116\" DevicePath \"\""
	Nov 21 14:00:19 addons-494116 kubelet[1273]: I1121 14:00:19.945732    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a2328cc-d14a-4e1f-8914-fe0de1504964" path="/var/lib/kubelet/pods/6a2328cc-d14a-4e1f-8914-fe0de1504964/volumes"
	Nov 21 14:01:15 addons-494116 kubelet[1273]: I1121 14:01:15.944795    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-tkkkl" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 14:01:22 addons-494116 kubelet[1273]: I1121 14:01:22.942201    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-cvgwr" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 14:01:23 addons-494116 kubelet[1273]: I1121 14:01:23.943327    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mlm5l" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 14:02:03 addons-494116 kubelet[1273]: I1121 14:02:03.046151    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c2fbe54c-6b2a-484f-a2f3-b0b216e69ccb-gcp-creds\") pod \"hello-world-app-5d498dc89-8w7t5\" (UID: \"c2fbe54c-6b2a-484f-a2f3-b0b216e69ccb\") " pod="default/hello-world-app-5d498dc89-8w7t5"
	Nov 21 14:02:03 addons-494116 kubelet[1273]: I1121 14:02:03.047020    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfklp\" (UniqueName: \"kubernetes.io/projected/c2fbe54c-6b2a-484f-a2f3-b0b216e69ccb-kube-api-access-lfklp\") pod \"hello-world-app-5d498dc89-8w7t5\" (UID: \"c2fbe54c-6b2a-484f-a2f3-b0b216e69ccb\") " pod="default/hello-world-app-5d498dc89-8w7t5"
	Nov 21 14:02:03 addons-494116 kubelet[1273]: W1121 14:02:03.327843    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76/crio-505a5a1573f3f8f1c310e102a08383c64c1a8e653176689923858e2611505518 WatchSource:0}: Error finding container 505a5a1573f3f8f1c310e102a08383c64c1a8e653176689923858e2611505518: Status 404 returned error can't find the container with id 505a5a1573f3f8f1c310e102a08383c64c1a8e653176689923858e2611505518
	Nov 21 14:02:04 addons-494116 kubelet[1273]: I1121 14:02:04.491835    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-8w7t5" podStartSLOduration=1.869302993 podStartE2EDuration="2.491816895s" podCreationTimestamp="2025-11-21 14:02:02 +0000 UTC" firstStartedPulling="2025-11-21 14:02:03.330997167 +0000 UTC m=+287.527584190" lastFinishedPulling="2025-11-21 14:02:03.953511061 +0000 UTC m=+288.150098092" observedRunningTime="2025-11-21 14:02:04.491017598 +0000 UTC m=+288.687604621" watchObservedRunningTime="2025-11-21 14:02:04.491816895 +0000 UTC m=+288.688403918"
	
	
	==> storage-provisioner [a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348] <==
	W1121 14:01:40.863709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:42.866871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:42.871872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:44.875738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:44.880615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:46.883904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:46.890978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:48.894180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:48.898874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:50.902205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:50.907163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:52.910153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:52.917161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:54.922221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:54.926937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:56.931310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:56.937893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:58.940719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:01:58.945415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:02:00.948195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:02:00.952837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:02:02.998613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:02:03.009990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:02:05.016642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:02:05.024620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-494116 -n addons-494116
helpers_test.go:269: (dbg) Run:  kubectl --context addons-494116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-lfq45 ingress-nginx-admission-patch-2v528 registry-creds-764b6fb674-sl95w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-494116 describe pod ingress-nginx-admission-create-lfq45 ingress-nginx-admission-patch-2v528 registry-creds-764b6fb674-sl95w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-494116 describe pod ingress-nginx-admission-create-lfq45 ingress-nginx-admission-patch-2v528 registry-creds-764b6fb674-sl95w: exit status 1 (137.747712ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lfq45" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2v528" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-sl95w" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-494116 describe pod ingress-nginx-admission-create-lfq45 ingress-nginx-admission-patch-2v528 registry-creds-764b6fb674-sl95w: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (336.172353ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 14:02:06.346424  301177 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:02:06.347458  301177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:02:06.347508  301177 out.go:374] Setting ErrFile to fd 2...
	I1121 14:02:06.347529  301177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:02:06.347820  301177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:02:06.348167  301177 mustload.go:66] Loading cluster: addons-494116
	I1121 14:02:06.348639  301177 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:02:06.348682  301177 addons.go:622] checking whether the cluster is paused
	I1121 14:02:06.348827  301177 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:02:06.348859  301177 host.go:66] Checking if "addons-494116" exists ...
	I1121 14:02:06.349390  301177 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 14:02:06.382120  301177 ssh_runner.go:195] Run: systemctl --version
	I1121 14:02:06.382190  301177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 14:02:06.418861  301177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 14:02:06.531219  301177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:02:06.531294  301177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:02:06.566239  301177 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 14:02:06.566258  301177 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 14:02:06.566263  301177 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 14:02:06.566267  301177 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 14:02:06.566279  301177 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 14:02:06.566283  301177 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 14:02:06.566286  301177 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 14:02:06.566290  301177 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 14:02:06.566293  301177 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 14:02:06.566299  301177 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 14:02:06.566303  301177 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 14:02:06.566306  301177 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 14:02:06.566310  301177 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 14:02:06.566313  301177 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 14:02:06.566316  301177 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 14:02:06.566321  301177 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 14:02:06.566325  301177 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 14:02:06.566329  301177 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 14:02:06.566332  301177 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 14:02:06.566336  301177 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 14:02:06.566341  301177 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 14:02:06.566344  301177 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 14:02:06.566347  301177 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 14:02:06.566350  301177 cri.go:89] found id: ""
	I1121 14:02:06.566401  301177 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:02:06.582758  301177 out.go:203] 
	W1121 14:02:06.585842  301177 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:02:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:02:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:02:06.585871  301177 out.go:285] * 
	* 
	W1121 14:02:06.591075  301177 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:02:06.593993  301177 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable ingress --alsologtostderr -v=1: exit status 11 (258.87365ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 14:02:06.649630  301301 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:02:06.650367  301301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:02:06.650385  301301 out.go:374] Setting ErrFile to fd 2...
	I1121 14:02:06.650391  301301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:02:06.650703  301301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:02:06.651036  301301 mustload.go:66] Loading cluster: addons-494116
	I1121 14:02:06.651456  301301 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:02:06.651477  301301 addons.go:622] checking whether the cluster is paused
	I1121 14:02:06.651617  301301 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:02:06.651638  301301 host.go:66] Checking if "addons-494116" exists ...
	I1121 14:02:06.652133  301301 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 14:02:06.669259  301301 ssh_runner.go:195] Run: systemctl --version
	I1121 14:02:06.669318  301301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 14:02:06.685951  301301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 14:02:06.791143  301301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:02:06.791227  301301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:02:06.822881  301301 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 14:02:06.822901  301301 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 14:02:06.822914  301301 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 14:02:06.822918  301301 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 14:02:06.822921  301301 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 14:02:06.822925  301301 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 14:02:06.822928  301301 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 14:02:06.822931  301301 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 14:02:06.822934  301301 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 14:02:06.822940  301301 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 14:02:06.822944  301301 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 14:02:06.822957  301301 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 14:02:06.822960  301301 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 14:02:06.822963  301301 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 14:02:06.822967  301301 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 14:02:06.822972  301301 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 14:02:06.822975  301301 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 14:02:06.822979  301301 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 14:02:06.822982  301301 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 14:02:06.822985  301301 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 14:02:06.822989  301301 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 14:02:06.822992  301301 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 14:02:06.822995  301301 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 14:02:06.822998  301301 cri.go:89] found id: ""
	I1121 14:02:06.823053  301301 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:02:06.843579  301301 out.go:203] 
	W1121 14:02:06.846585  301301 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:02:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:02:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:02:06.846606  301301 out.go:285] * 
	* 
	W1121 14:02:06.851629  301301 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:02:06.854731  301301 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.62s)
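Note: every addon disable/enable failure in this run exits at the same point. Before touching the addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node; on this crio node the runc state directory /run/runc does not exist, so the command exits with status 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED. Below is a minimal Go sketch that reproduces the failing check from the host. It assumes the docker driver and the addons-494116 node container from this report, and it mirrors, but is not, minikube's own code.

	// repro_paused_check.go - hypothetical repro of the paused check that
	// fails throughout this run; not part of the minikube test suite.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command minikube runs over SSH inside the node; because the
		// docker driver's node is itself a container, docker exec is a
		// close stand-in for the test's SSH runner.
		out, err := exec.Command("docker", "exec", "addons-494116",
			"sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("output: %s\n", out)
		if err != nil {
			// On a crio node without a /run/runc state directory this
			// should be "exit status 1", matching the logs above.
			fmt.Printf("paused check failed: %v\n", err)
		}
	}

Run against a node whose CRI runtime keeps its state elsewhere, this should surface the same "open /run/runc: no such file or directory" error captured in the stderr blocks above.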

TestAddons/parallel/InspektorGadget (5.33s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-mndpk" [0dc399d7-3a02-4139-8f6f-85449232ee96] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005591635s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (323.589498ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 14:00:25.166949  300157 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:00:25.167933  300157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:25.167978  300157 out.go:374] Setting ErrFile to fd 2...
	I1121 14:00:25.168002  300157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:25.168359  300157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:00:25.168794  300157 mustload.go:66] Loading cluster: addons-494116
	I1121 14:00:25.169240  300157 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:25.169286  300157 addons.go:622] checking whether the cluster is paused
	I1121 14:00:25.169424  300157 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:25.169463  300157 host.go:66] Checking if "addons-494116" exists ...
	I1121 14:00:25.170069  300157 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 14:00:25.192774  300157 ssh_runner.go:195] Run: systemctl --version
	I1121 14:00:25.192832  300157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 14:00:25.216748  300157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 14:00:25.324777  300157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:00:25.324880  300157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:00:25.389137  300157 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 14:00:25.389166  300157 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 14:00:25.389171  300157 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 14:00:25.389175  300157 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 14:00:25.389179  300157 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 14:00:25.389182  300157 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 14:00:25.389185  300157 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 14:00:25.389189  300157 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 14:00:25.389192  300157 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 14:00:25.389203  300157 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 14:00:25.389206  300157 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 14:00:25.389210  300157 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 14:00:25.389213  300157 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 14:00:25.389216  300157 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 14:00:25.389219  300157 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 14:00:25.389226  300157 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 14:00:25.389230  300157 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 14:00:25.389235  300157 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 14:00:25.389251  300157 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 14:00:25.389255  300157 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 14:00:25.389260  300157 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 14:00:25.389263  300157 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 14:00:25.389266  300157 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 14:00:25.389269  300157 cri.go:89] found id: ""
	I1121 14:00:25.389324  300157 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:00:25.416610  300157 out.go:203] 
	W1121 14:00:25.422418  300157 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:00:25.422459  300157 out.go:285] * 
	* 
	W1121 14:00:25.427499  300157 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:00:25.431490  300157 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.33s)

TestAddons/parallel/MetricsServer (5.42s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.656324ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003751079s
addons_test.go:463: (dbg) Run:  kubectl --context addons-494116 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (307.960682ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 13:59:40.990136  299064 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:59:40.990950  299064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:40.990979  299064 out.go:374] Setting ErrFile to fd 2...
	I1121 13:59:40.990987  299064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:40.991428  299064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:59:40.991928  299064 mustload.go:66] Loading cluster: addons-494116
	I1121 13:59:40.992550  299064 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:40.992577  299064 addons.go:622] checking whether the cluster is paused
	I1121 13:59:40.992763  299064 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:40.992811  299064 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:59:40.993714  299064 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:59:41.013059  299064 ssh_runner.go:195] Run: systemctl --version
	I1121 13:59:41.013123  299064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:59:41.030768  299064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:59:41.141841  299064 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:59:41.141928  299064 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:59:41.204189  299064 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 13:59:41.204208  299064 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 13:59:41.204213  299064 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 13:59:41.204217  299064 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 13:59:41.204220  299064 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 13:59:41.204224  299064 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 13:59:41.204232  299064 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 13:59:41.204235  299064 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 13:59:41.204239  299064 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 13:59:41.204245  299064 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 13:59:41.204249  299064 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 13:59:41.204252  299064 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 13:59:41.204255  299064 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 13:59:41.204258  299064 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 13:59:41.204261  299064 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 13:59:41.204266  299064 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 13:59:41.204269  299064 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 13:59:41.204273  299064 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 13:59:41.204276  299064 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 13:59:41.204279  299064 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 13:59:41.204283  299064 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 13:59:41.204286  299064 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 13:59:41.204290  299064 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 13:59:41.204292  299064 cri.go:89] found id: ""
	I1121 13:59:41.204341  299064 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:59:41.226846  299064 out.go:203] 
	W1121 13:59:41.229757  299064 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:59:41.229786  299064 out.go:285] * 
	* 
	W1121 13:59:41.234803  299064 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:59:41.237891  299064 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.42s)
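Note: the functional part of this test passed; `kubectl top pods -n kube-system` at addons_test.go:463 succeeded, and only the trailing addon-disable step hit the paused check described above. For reference, a minimal sketch of the aggregated API that `kubectl top` depends on, assuming kubectl on PATH and the addons-494116 context from this report (metrics_check.go is a hypothetical name):

	// metrics_check.go - hypothetical probe of the metrics.k8s.io API that
	// backs `kubectl top pods`; served by the metrics-server addon.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// kubectl top resolves pod usage through this aggregated API
		// group; if metrics-server is unhealthy, the raw call fails too.
		out, err := exec.Command("kubectl", "--context", "addons-494116",
			"get", "--raw",
			"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods").Output()
		if err != nil {
			fmt.Printf("metrics API not available: %v\n", err)
			return
		}
		fmt.Printf("PodMetricsList: %d bytes of JSON\n", len(out))
	}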

TestAddons/parallel/CSI (40.27s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1121 13:59:39.843122  291060 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1121 13:59:39.854429  291060 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1121 13:59:39.854454  291060 kapi.go:107] duration metric: took 11.34672ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 11.355155ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-494116 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-494116 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [7459701e-7529-46aa-b1d7-a282b0972871] Pending
helpers_test.go:352: "task-pv-pod" [7459701e-7529-46aa-b1d7-a282b0972871] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [7459701e-7529-46aa-b1d7-a282b0972871] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.022491148s
addons_test.go:572: (dbg) Run:  kubectl --context addons-494116 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-494116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-494116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-494116 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-494116 delete pod task-pv-pod: (1.115279512s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-494116 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-494116 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-494116 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [6a2328cc-d14a-4e1f-8914-fe0de1504964] Pending
helpers_test.go:352: "task-pv-pod-restore" [6a2328cc-d14a-4e1f-8914-fe0de1504964] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [6a2328cc-d14a-4e1f-8914-fe0de1504964] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003442312s
addons_test.go:614: (dbg) Run:  kubectl --context addons-494116 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-494116 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-494116 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (266.312914ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 14:00:19.578735  300049 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:00:19.579674  300049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:19.579695  300049 out.go:374] Setting ErrFile to fd 2...
	I1121 14:00:19.579702  300049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:19.580146  300049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:00:19.580618  300049 mustload.go:66] Loading cluster: addons-494116
	I1121 14:00:19.581102  300049 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:19.581121  300049 addons.go:622] checking whether the cluster is paused
	I1121 14:00:19.581249  300049 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:19.581265  300049 host.go:66] Checking if "addons-494116" exists ...
	I1121 14:00:19.581847  300049 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 14:00:19.602129  300049 ssh_runner.go:195] Run: systemctl --version
	I1121 14:00:19.602180  300049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 14:00:19.621048  300049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 14:00:19.727305  300049 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:00:19.727406  300049 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:00:19.759633  300049 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 14:00:19.759668  300049 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 14:00:19.759674  300049 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 14:00:19.759679  300049 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 14:00:19.759683  300049 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 14:00:19.759687  300049 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 14:00:19.759691  300049 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 14:00:19.759694  300049 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 14:00:19.759698  300049 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 14:00:19.759709  300049 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 14:00:19.759715  300049 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 14:00:19.759720  300049 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 14:00:19.759726  300049 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 14:00:19.759730  300049 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 14:00:19.759734  300049 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 14:00:19.759740  300049 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 14:00:19.759746  300049 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 14:00:19.759754  300049 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 14:00:19.759757  300049 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 14:00:19.759761  300049 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 14:00:19.759765  300049 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 14:00:19.759768  300049 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 14:00:19.759771  300049 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 14:00:19.759774  300049 cri.go:89] found id: ""
	I1121 14:00:19.759829  300049 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:00:19.774833  300049 out.go:203] 
	W1121 14:00:19.777689  300049 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:00:19.777722  300049 out.go:285] * 
	* 
	W1121 14:00:19.782936  300049 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:00:19.786072  300049 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (313.190385ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 14:00:19.844898  300092 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:00:19.845679  300092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:19.845717  300092 out.go:374] Setting ErrFile to fd 2...
	I1121 14:00:19.845736  300092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:19.846050  300092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:00:19.846387  300092 mustload.go:66] Loading cluster: addons-494116
	I1121 14:00:19.846820  300092 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:19.846869  300092 addons.go:622] checking whether the cluster is paused
	I1121 14:00:19.847010  300092 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:19.847048  300092 host.go:66] Checking if "addons-494116" exists ...
	I1121 14:00:19.847568  300092 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 14:00:19.867311  300092 ssh_runner.go:195] Run: systemctl --version
	I1121 14:00:19.867379  300092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 14:00:19.890862  300092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 14:00:20.011457  300092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:00:20.011583  300092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:00:20.066478  300092 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 14:00:20.066499  300092 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 14:00:20.066503  300092 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 14:00:20.066507  300092 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 14:00:20.066511  300092 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 14:00:20.066515  300092 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 14:00:20.066518  300092 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 14:00:20.066522  300092 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 14:00:20.066526  300092 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 14:00:20.066533  300092 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 14:00:20.066537  300092 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 14:00:20.066540  300092 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 14:00:20.066543  300092 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 14:00:20.066546  300092 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 14:00:20.066549  300092 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 14:00:20.066558  300092 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 14:00:20.066561  300092 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 14:00:20.066566  300092 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 14:00:20.066569  300092 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 14:00:20.066572  300092 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 14:00:20.066577  300092 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 14:00:20.066580  300092 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 14:00:20.066584  300092 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 14:00:20.066587  300092 cri.go:89] found id: ""
	I1121 14:00:20.066642  300092 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:00:20.089739  300092 out.go:203] 
	W1121 14:00:20.092852  300092 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:00:20.092886  300092 out.go:285] * 
	* 
	W1121 14:00:20.097889  300092 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:00:20.100977  300092 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.27s)
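Note: the repeated helpers_test.go:402 lines above are a poll loop on the PVC phase; the CSI data path itself worked (hpvc bound, task-pv-pod ran, snapshot and restore succeeded), and only the final addon-disable calls failed on the paused check. A minimal sketch of that kind of poll, assuming kubectl on PATH and the addons-494116 context (wait_pvc.go is a hypothetical name, not minikube's helper):

	// wait_pvc.go - hypothetical version of the PVC phase poll behind the
	// repeated helpers_test.go:402 lines in this section.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // matches "waiting 6m0s for pvc"
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", "--context", "addons-494116",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			// Each retry corresponds to one helpers_test.go:402 line above.
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pvc hpvc")
	}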

TestAddons/parallel/Headlamp (3.32s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-494116 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-494116 --alsologtostderr -v=1: exit status 11 (262.279885ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 13:59:15.456184  297862 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:59:15.457018  297862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:15.457032  297862 out.go:374] Setting ErrFile to fd 2...
	I1121 13:59:15.457037  297862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:15.457307  297862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:59:15.457622  297862 mustload.go:66] Loading cluster: addons-494116
	I1121 13:59:15.458004  297862 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:15.458021  297862 addons.go:622] checking whether the cluster is paused
	I1121 13:59:15.458127  297862 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:15.458144  297862 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:59:15.458593  297862 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:59:15.476864  297862 ssh_runner.go:195] Run: systemctl --version
	I1121 13:59:15.476937  297862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:59:15.495496  297862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:59:15.603202  297862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:59:15.603301  297862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:59:15.632676  297862 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 13:59:15.632697  297862 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 13:59:15.632702  297862 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 13:59:15.632706  297862 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 13:59:15.632709  297862 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 13:59:15.632712  297862 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 13:59:15.632716  297862 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 13:59:15.632719  297862 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 13:59:15.632722  297862 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 13:59:15.632729  297862 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 13:59:15.632733  297862 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 13:59:15.632736  297862 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 13:59:15.632740  297862 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 13:59:15.632744  297862 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 13:59:15.632752  297862 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 13:59:15.632762  297862 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 13:59:15.632766  297862 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 13:59:15.632770  297862 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 13:59:15.632773  297862 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 13:59:15.632776  297862 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 13:59:15.632780  297862 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 13:59:15.632783  297862 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 13:59:15.632786  297862 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 13:59:15.632789  297862 cri.go:89] found id: ""
	I1121 13:59:15.632838  297862 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:59:15.647904  297862 out.go:203] 
	W1121 13:59:15.651373  297862 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:59:15.651414  297862 out.go:285] * 
	* 
	W1121 13:59:15.656359  297862 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:59:15.659145  297862 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-494116 --alsologtostderr -v=1": exit status 11
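The failing step is minikube's pre-enable paused check: it lists the runtime's containers with `sudo runc list -f json`, and on this crio node the runc state directory /run/runc does not exist, so the command exits 1 and the addon enable aborts with MK_ADDON_ENABLE_PAUSED. A minimal Go sketch of that kind of check (illustrative only; the names here are assumptions, not minikube's actual cruntime code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the two fields of `runc list -f json` output that a
// paused-state check needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused runs the same command the log shows failing. When /run/runc is
// missing, runc exits non-zero and the error propagates, which is exactly
// the MK_ADDON_ENABLE_PAUSED path above.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("sudo runc list -f json: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	fmt.Println("paused containers:", paused, "err:", err)
}

Run on the node in this report, listPaused would surface the same "open /run/runc: no such file or directory" error captured above.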
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-494116
helpers_test.go:243: (dbg) docker inspect addons-494116:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76",
	        "Created": "2025-11-21T13:56:52.980210617Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292223,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T13:56:53.060083115Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76/hostname",
	        "HostsPath": "/var/lib/docker/containers/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76/hosts",
	        "LogPath": "/var/lib/docker/containers/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76/e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76-json.log",
	        "Name": "/addons-494116",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-494116:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-494116",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e74411d169a0c590256d1172529311fd954008f3c840dabc7bc3e82f3d03cf76",
	                "LowerDir": "/var/lib/docker/overlay2/b16329cab56eeec1a57b3a7fc8e23d8becc0dc2af28741219de2d5e7efcbb21e-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b16329cab56eeec1a57b3a7fc8e23d8becc0dc2af28741219de2d5e7efcbb21e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b16329cab56eeec1a57b3a7fc8e23d8becc0dc2af28741219de2d5e7efcbb21e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b16329cab56eeec1a57b3a7fc8e23d8becc0dc2af28741219de2d5e7efcbb21e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-494116",
	                "Source": "/var/lib/docker/volumes/addons-494116/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-494116",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-494116",
	                "name.minikube.sigs.k8s.io": "addons-494116",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79d62b5fcc1fcb3e4f9091f14e0bbd056fa76c568576027ba5277b5908cb5326",
	            "SandboxKey": "/var/run/docker/netns/79d62b5fcc1f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-494116": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:ce:fc:f1:71:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "587264a7b645551a83ce3ffe958371206d7c19bdd86cc4c3f3fb0b4264d0950a",
	                    "EndpointID": "89d81790eddd7906eb2f05d3fdd58f8b77c2ee055f030450819822caa1d92169",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-494116",
	                        "e74411d169a0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
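The key datum for this post-mortem sits in the NetworkSettings block above: each container port is published on a loopback host port (for example 8443/tcp on 127.0.0.1:33141). A small Go sketch of pulling such a binding out of `docker inspect` output (the helper is an assumption for illustration, not part of the test suite):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// portBinding matches entries under NetworkSettings.Ports in the
// `docker inspect` JSON shown above.
type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// hostPortFor returns the host port a container port (e.g. "8443/tcp")
// is published on, according to `docker inspect`.
func hostPortFor(container, port string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var infos []struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}
	if err := json.Unmarshal(out, &infos); err != nil {
		return "", err
	}
	if len(infos) == 0 || len(infos[0].NetworkSettings.Ports[port]) == 0 {
		return "", fmt.Errorf("%s: %s not published", container, port)
	}
	return infos[0].NetworkSettings.Ports[port][0].HostPort, nil
}

func main() {
	// Against the inspect output above this yields "33141".
	fmt.Println(hostPortFor("addons-494116", "8443/tcp"))
}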
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-494116 -n addons-494116
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-494116 logs -n 25: (1.589353188s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ PROFILE                │ USER    │ VERSION │ START TIME          │ END TIME            │ ARGS
	│ start   │ download-only-437513   │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │ -o=json --download-only -p download-only-437513 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
	│ delete  │ minikube               │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │ --all
	│ delete  │ download-only-437513   │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │ -p download-only-437513
	│ start   │ download-only-946349   │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │ -o=json --download-only -p download-only-946349 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
	│ delete  │ minikube               │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │ --all
	│ delete  │ download-only-946349   │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │ -p download-only-946349
	│ delete  │ download-only-437513   │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │ -p download-only-437513
	│ delete  │ download-only-946349   │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │ -p download-only-946349
	│ start   │ download-docker-223827 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │ --download-only -p download-docker-223827 --alsologtostderr --driver=docker  --container-runtime=crio
	│ delete  │ download-docker-223827 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │ -p download-docker-223827
	│ start   │ binary-mirror-355307   │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │ --download-only -p binary-mirror-355307 --alsologtostderr --binary-mirror http://127.0.0.1:37741 --driver=docker  --container-runtime=crio
	│ delete  │ binary-mirror-355307   │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │ -p binary-mirror-355307
	│ addons  │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │ enable dashboard -p addons-494116
	│ addons  │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │ disable dashboard -p addons-494116
	│ start   │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:59 UTC │ -p addons-494116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
	│ addons  │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │ addons-494116 addons disable volcano --alsologtostderr -v=1
	│ addons  │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │ addons-494116 addons disable gcp-auth --alsologtostderr -v=1
	│ addons  │ addons-494116          │ jenkins │ v1.37.0 │ 21 Nov 25 13:59 UTC │                     │ enable headlamp -p addons-494116 --alsologtostderr -v=1
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:56:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 13:56:26.729816  291820 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:56:26.729942  291820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:26.729979  291820 out.go:374] Setting ErrFile to fd 2...
	I1121 13:56:26.730000  291820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:26.730270  291820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:56:26.730771  291820 out.go:368] Setting JSON to false
	I1121 13:56:26.731611  291820 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5939,"bootTime":1763727448,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 13:56:26.731723  291820 start.go:143] virtualization:  
	I1121 13:56:26.735163  291820 out.go:179] * [addons-494116] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 13:56:26.738222  291820 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 13:56:26.738355  291820 notify.go:221] Checking for updates...
	I1121 13:56:26.744055  291820 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:56:26.746898  291820 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 13:56:26.749755  291820 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 13:56:26.752553  291820 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 13:56:26.755438  291820 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 13:56:26.758561  291820 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:56:26.783802  291820 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 13:56:26.783947  291820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:26.850875  291820 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-21 13:56:26.84194549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:26.850984  291820 docker.go:319] overlay module found
	I1121 13:56:26.854046  291820 out.go:179] * Using the docker driver based on user configuration
	I1121 13:56:26.856895  291820 start.go:309] selected driver: docker
	I1121 13:56:26.856916  291820 start.go:930] validating driver "docker" against <nil>
	I1121 13:56:26.856931  291820 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 13:56:26.857675  291820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:26.909643  291820 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-21 13:56:26.900710987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:26.909787  291820 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:56:26.910023  291820 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 13:56:26.912881  291820 out.go:179] * Using Docker driver with root privileges
	I1121 13:56:26.915743  291820 cni.go:84] Creating CNI manager for ""
	I1121 13:56:26.915810  291820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:56:26.915830  291820 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 13:56:26.915958  291820 start.go:353] cluster config:
	{Name:addons-494116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-494116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:56:26.919032  291820 out.go:179] * Starting "addons-494116" primary control-plane node in "addons-494116" cluster
	I1121 13:56:26.921806  291820 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 13:56:26.924743  291820 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 13:56:26.928451  291820 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:26.928495  291820 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 13:56:26.928505  291820 cache.go:65] Caching tarball of preloaded images
	I1121 13:56:26.928596  291820 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 13:56:26.928608  291820 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 13:56:26.928960  291820 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/config.json ...
	I1121 13:56:26.928983  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/config.json: {Name:mk6b810371b11a03b9c7383d68ba56a04cef9656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:26.929153  291820 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 13:56:26.944697  291820 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:56:26.944818  291820 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1121 13:56:26.944844  291820 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1121 13:56:26.944849  291820 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1121 13:56:26.944860  291820 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1121 13:56:26.944869  291820 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1121 13:56:44.880838  291820 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1121 13:56:44.880874  291820 cache.go:243] Successfully downloaded all kic artifacts
	I1121 13:56:44.880902  291820 start.go:360] acquireMachinesLock for addons-494116: {Name:mk57a69ee47985a543fa348598b6ec0e32b4cb76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 13:56:44.881039  291820 start.go:364] duration metric: took 119.435µs to acquireMachinesLock for "addons-494116"
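The machines lock acquired here is a named lock with a 500ms retry delay and a 10-minute timeout, per the parameters logged above. A rough sketch of a timed, retrying file lock using only the standard library (an assumption about the mechanism; minikube's lock.go may differ in detail):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire tries to create the lock file exclusively, retrying every delay
// until timeout, mirroring the Delay/Timeout fields in the log above.
// The lock is released by removing the file.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f.Close() // lock held
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	fmt.Println(err)
}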
	I1121 13:56:44.881065  291820 start.go:93] Provisioning new machine with config: &{Name:addons-494116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-494116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 13:56:44.881143  291820 start.go:125] createHost starting for "" (driver="docker")
	I1121 13:56:44.884563  291820 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1121 13:56:44.884801  291820 start.go:159] libmachine.API.Create for "addons-494116" (driver="docker")
	I1121 13:56:44.884838  291820 client.go:173] LocalClient.Create starting
	I1121 13:56:44.884961  291820 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem
	I1121 13:56:45.172232  291820 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem
	I1121 13:56:46.090506  291820 cli_runner.go:164] Run: docker network inspect addons-494116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 13:56:46.106635  291820 cli_runner.go:211] docker network inspect addons-494116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 13:56:46.106722  291820 network_create.go:284] running [docker network inspect addons-494116] to gather additional debugging logs...
	I1121 13:56:46.106744  291820 cli_runner.go:164] Run: docker network inspect addons-494116
	W1121 13:56:46.122638  291820 cli_runner.go:211] docker network inspect addons-494116 returned with exit code 1
	I1121 13:56:46.122670  291820 network_create.go:287] error running [docker network inspect addons-494116]: docker network inspect addons-494116: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-494116 not found
	I1121 13:56:46.122685  291820 network_create.go:289] output of [docker network inspect addons-494116]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-494116 not found
	
	** /stderr **
	I1121 13:56:46.122788  291820 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 13:56:46.140961  291820 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400199b230}
	I1121 13:56:46.141005  291820 network_create.go:124] attempt to create docker network addons-494116 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1121 13:56:46.141072  291820 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-494116 addons-494116
	I1121 13:56:46.200798  291820 network_create.go:108] docker network addons-494116 192.168.49.0/24 created
	I1121 13:56:46.200827  291820 kic.go:121] calculated static IP "192.168.49.2" for the "addons-494116" container
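The "calculated static IP" follows directly from the subnet picked a few lines up: the gateway takes the first host address of the /24 and the node container the second. A tiny sketch of that derivation (the helper name is illustrative, not minikube's):

package main

import (
	"fmt"
	"net/netip"
)

// staticIPs mirrors the derivation visible in the log above: for a /24
// like 192.168.49.0/24, the gateway is .1 and the node gets .2.
func staticIPs(cidr string) (gw, node netip.Addr, err error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return gw, node, err
	}
	gw = p.Addr().Next() // network address + 1 -> 192.168.49.1
	node = gw.Next()     // + 2 -> 192.168.49.2
	return gw, node, nil
}

func main() {
	gw, node, err := staticIPs("192.168.49.0/24")
	fmt.Println(gw, node, err)
}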
	I1121 13:56:46.200918  291820 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 13:56:46.216641  291820 cli_runner.go:164] Run: docker volume create addons-494116 --label name.minikube.sigs.k8s.io=addons-494116 --label created_by.minikube.sigs.k8s.io=true
	I1121 13:56:46.234821  291820 oci.go:103] Successfully created a docker volume addons-494116
	I1121 13:56:46.234906  291820 cli_runner.go:164] Run: docker run --rm --name addons-494116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-494116 --entrypoint /usr/bin/test -v addons-494116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 13:56:48.481816  291820 cli_runner.go:217] Completed: docker run --rm --name addons-494116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-494116 --entrypoint /usr/bin/test -v addons-494116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (2.246873865s)
	I1121 13:56:48.481848  291820 oci.go:107] Successfully prepared a docker volume addons-494116
	I1121 13:56:48.481902  291820 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:48.481912  291820 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 13:56:48.481972  291820 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-494116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 13:56:52.912298  291820 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-494116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.430284801s)
	I1121 13:56:52.912329  291820 kic.go:203] duration metric: took 4.430413812s to extract preloaded images to volume ...
	W1121 13:56:52.912513  291820 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 13:56:52.912627  291820 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 13:56:52.965421  291820 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-494116 --name addons-494116 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-494116 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-494116 --network addons-494116 --ip 192.168.49.2 --volume addons-494116:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 13:56:53.290587  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Running}}
	I1121 13:56:53.312476  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:56:53.336579  291820 cli_runner.go:164] Run: docker exec addons-494116 stat /var/lib/dpkg/alternatives/iptables
	I1121 13:56:53.389853  291820 oci.go:144] the created container "addons-494116" has a running status.
	I1121 13:56:53.389879  291820 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa...
	I1121 13:56:54.115307  291820 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 13:56:54.137424  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:56:54.164713  291820 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 13:56:54.164733  291820 kic_runner.go:114] Args: [docker exec --privileged addons-494116 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 13:56:54.217397  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:56:54.236808  291820 machine.go:94] provisionDockerMachine start ...
	I1121 13:56:54.237072  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:54.256822  291820 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:54.257288  291820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1121 13:56:54.257313  291820 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 13:56:54.403877  291820 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-494116
	
	I1121 13:56:54.403898  291820 ubuntu.go:182] provisioning hostname "addons-494116"
	I1121 13:56:54.403962  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:54.423901  291820 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:54.424537  291820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1121 13:56:54.424557  291820 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-494116 && echo "addons-494116" | sudo tee /etc/hostname
	I1121 13:56:54.577841  291820 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-494116
	
	I1121 13:56:54.577937  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:54.596785  291820 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:54.597085  291820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1121 13:56:54.597116  291820 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-494116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-494116/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-494116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 13:56:54.736781  291820 main.go:143] libmachine: SSH cmd err, output: <nil>: 
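Each "About to run SSH command" step above executes over the forwarded SSH port (127.0.0.1:33138) using the machine key created earlier. A compact sketch of the same kind of remote exec with golang.org/x/crypto/ssh (the helper, and its use of InsecureIgnoreHostKey, are assumptions suitable only for a throwaway test rig, not libmachine's code):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH performs the kind of command execution the provisioner logs
// above (`hostname`, `sudo hostname ...`) against a forwarded port.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.Output(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33138", "docker",
		"/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa",
		"hostname")
	fmt.Println(out, err)
}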
	I1121 13:56:54.736808  291820 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 13:56:54.736839  291820 ubuntu.go:190] setting up certificates
	I1121 13:56:54.736855  291820 provision.go:84] configureAuth start
	I1121 13:56:54.736917  291820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-494116
	I1121 13:56:54.753975  291820 provision.go:143] copyHostCerts
	I1121 13:56:54.754066  291820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 13:56:54.754202  291820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 13:56:54.754266  291820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 13:56:54.754318  291820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.addons-494116 san=[127.0.0.1 192.168.49.2 addons-494116 localhost minikube]
	I1121 13:56:55.471745  291820 provision.go:177] copyRemoteCerts
	I1121 13:56:55.471826  291820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 13:56:55.471867  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:55.489556  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:56:55.588102  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 13:56:55.605998  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 13:56:55.623844  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 13:56:55.641969  291820 provision.go:87] duration metric: took 905.089341ms to configureAuth
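configureAuth generates a server certificate signed by the minikube CA with the SANs listed above ([127.0.0.1 192.168.49.2 addons-494116 localhost minikube]). A self-contained sketch of issuing such a certificate with crypto/x509 (key size, lifetime, and field choices are assumptions; minikube's provisioner differs in detail):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate carrying the SANs from the
// log above. The 26280h lifetime echoes CertExpiration in the cluster
// config; RSA 2048 is an assumption for illustration.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-494116"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-494116", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Self-signed CA standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	pemBytes, err := issueServerCert(caCert, caKey)
	fmt.Println(len(pemBytes), err)
}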
	I1121 13:56:55.641995  291820 ubuntu.go:206] setting minikube options for container-runtime
	I1121 13:56:55.642190  291820 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:56:55.642289  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:55.659116  291820 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:55.659479  291820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1121 13:56:55.659496  291820 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 13:56:55.973162  291820 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 13:56:55.973185  291820 machine.go:97] duration metric: took 1.736328827s to provisionDockerMachine
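Because this run uses the docker driver, the CRIO_MINIKUBE_OPTIONS drop-in written over SSH above can be spot-checked from the host with a plain docker exec (container name as in this run):

	docker exec addons-494116 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '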
	I1121 13:56:55.973196  291820 client.go:176] duration metric: took 11.088347531s to LocalClient.Create
	I1121 13:56:55.973210  291820 start.go:167] duration metric: took 11.088410743s to libmachine.API.Create "addons-494116"
	I1121 13:56:55.973218  291820 start.go:293] postStartSetup for "addons-494116" (driver="docker")
	I1121 13:56:55.973232  291820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 13:56:55.973298  291820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 13:56:55.973344  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:55.992652  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:56:56.092643  291820 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 13:56:56.096093  291820 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 13:56:56.096125  291820 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 13:56:56.096138  291820 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 13:56:56.096235  291820 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 13:56:56.096292  291820 start.go:296] duration metric: took 123.067355ms for postStartSetup
	I1121 13:56:56.096673  291820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-494116
	I1121 13:56:56.113143  291820 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/config.json ...
	I1121 13:56:56.113446  291820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 13:56:56.113494  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:56.130184  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:56:56.225265  291820 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 13:56:56.229839  291820 start.go:128] duration metric: took 11.348679178s to createHost
	I1121 13:56:56.229865  291820 start.go:83] releasing machines lock for "addons-494116", held for 11.348817616s
	I1121 13:56:56.229950  291820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-494116
	I1121 13:56:56.246720  291820 ssh_runner.go:195] Run: cat /version.json
	I1121 13:56:56.246772  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:56.246788  291820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 13:56:56.246849  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:56:56.270676  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:56:56.273950  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:56:56.367953  291820 ssh_runner.go:195] Run: systemctl --version
	I1121 13:56:56.467643  291820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 13:56:56.505583  291820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 13:56:56.510002  291820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 13:56:56.510074  291820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 13:56:56.538697  291820 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 13:56:56.538773  291820 start.go:496] detecting cgroup driver to use...
	I1121 13:56:56.538824  291820 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 13:56:56.538898  291820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 13:56:56.556605  291820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 13:56:56.569428  291820 docker.go:218] disabling cri-docker service (if available) ...
	I1121 13:56:56.569493  291820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 13:56:56.587130  291820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 13:56:56.605981  291820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 13:56:56.723958  291820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 13:56:56.838244  291820 docker.go:234] disabling docker service ...
	I1121 13:56:56.838320  291820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 13:56:56.858597  291820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 13:56:56.871734  291820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 13:56:56.988567  291820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 13:56:57.110459  291820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 13:56:57.122977  291820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 13:56:57.136725  291820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 13:56:57.136792  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.145298  291820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 13:56:57.145417  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.154362  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.163016  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.171885  291820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 13:56:57.180315  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.188981  291820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.202151  291820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:57.210812  291820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 13:56:57.218792  291820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 13:56:57.226317  291820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:56:57.341083  291820 ssh_runner.go:195] Run: sudo systemctl restart crio
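Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a reconstruction from the commands, not captured output; section placement per CRI-O's TOML schema):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]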
	I1121 13:56:57.517201  291820 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 13:56:57.517297  291820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 13:56:57.521327  291820 start.go:564] Will wait 60s for crictl version
	I1121 13:56:57.521402  291820 ssh_runner.go:195] Run: which crictl
	I1121 13:56:57.525236  291820 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 13:56:57.550046  291820 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 13:56:57.550149  291820 ssh_runner.go:195] Run: crio --version
	I1121 13:56:57.579451  291820 ssh_runner.go:195] Run: crio --version
	I1121 13:56:57.612819  291820 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 13:56:57.615689  291820 cli_runner.go:164] Run: docker network inspect addons-494116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 13:56:57.630884  291820 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1121 13:56:57.634921  291820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
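The bash one-liner above is an idempotent hosts rewrite: filter out any stale host.minikube.internal entry, append the fresh mapping, then sudo-copy the temp file back, so /etc/hosts ends up with exactly one line of the form:

	192.168.49.1	host.minikube.internal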
	I1121 13:56:57.644668  291820 kubeadm.go:884] updating cluster {Name:addons-494116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-494116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 13:56:57.644785  291820 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:57.644844  291820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 13:56:57.676751  291820 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 13:56:57.676775  291820 crio.go:433] Images already preloaded, skipping extraction
	I1121 13:56:57.676835  291820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 13:56:57.700771  291820 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 13:56:57.700797  291820 cache_images.go:86] Images are preloaded, skipping loading
	I1121 13:56:57.700805  291820 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1121 13:56:57.700897  291820 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-494116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-494116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
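The unit text above is what gets copied a few lines below to /lib/systemd/system/kubelet.service and the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in; the merged result can be inspected on the node with:

	systemctl cat kubelet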
	I1121 13:56:57.700984  291820 ssh_runner.go:195] Run: crio config
	I1121 13:56:57.758834  291820 cni.go:84] Creating CNI manager for ""
	I1121 13:56:57.758867  291820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:56:57.758886  291820 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 13:56:57.758909  291820 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-494116 NodeName:addons-494116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 13:56:57.759068  291820 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-494116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 13:56:57.759156  291820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 13:56:57.767215  291820 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 13:56:57.767359  291820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 13:56:57.775099  291820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1121 13:56:57.788186  291820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 13:56:57.801148  291820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
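The 2210-byte kubeadm.yaml staged above (the three-document config printed earlier) can be sanity-checked offline before init runs, assuming a kubeadm binary of the matching minor version on PATH:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml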
	I1121 13:56:57.814308  291820 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1121 13:56:57.817892  291820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 13:56:57.827382  291820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:56:57.934722  291820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 13:56:57.950215  291820 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116 for IP: 192.168.49.2
	I1121 13:56:57.950238  291820 certs.go:195] generating shared ca certs ...
	I1121 13:56:57.950254  291820 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:57.950424  291820 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 13:56:58.340446  291820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt ...
	I1121 13:56:58.340479  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt: {Name:mk01ef5db40284bad7e0471d9cd816e60aef2b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:58.340701  291820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key ...
	I1121 13:56:58.340718  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key: {Name:mkddaed89356e289a6f4f6f92ae42c242180f1b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:58.340811  291820 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 13:56:59.233088  291820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt ...
	I1121 13:56:59.233123  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt: {Name:mkae5d1cd20f520064d91f058cc7cf77381cc0dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:59.233299  291820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key ...
	I1121 13:56:59.233312  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key: {Name:mkf3e8a12a260634981c0a73f3ea867340b04447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:59.233406  291820 certs.go:257] generating profile certs ...
	I1121 13:56:59.233464  291820 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.key
	I1121 13:56:59.233482  291820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt with IP's: []
	I1121 13:56:59.637624  291820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt ...
	I1121 13:56:59.637657  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: {Name:mk4153a46c912172ef9e929bdb69daf498e63595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:59.637844  291820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.key ...
	I1121 13:56:59.637858  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.key: {Name:mkb94890a5ce0a35bb57f616068aa0a91111d832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:59.637944  291820 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key.8fb2f0cb
	I1121 13:56:59.637965  291820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt.8fb2f0cb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1121 13:57:00.552224  291820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt.8fb2f0cb ...
	I1121 13:57:00.552265  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt.8fb2f0cb: {Name:mk558f81bfae5871bad15b338e986a8536c6e4eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:00.552489  291820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key.8fb2f0cb ...
	I1121 13:57:00.552511  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key.8fb2f0cb: {Name:mkfd84d898dc10022503813840a4ac1695fb5ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:00.552593  291820 certs.go:382] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt.8fb2f0cb -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt
	I1121 13:57:00.552687  291820 certs.go:386] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key.8fb2f0cb -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key
	I1121 13:57:00.552738  291820 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.key
	I1121 13:57:00.552759  291820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.crt with IP's: []
	I1121 13:57:01.088130  291820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.crt ...
	I1121 13:57:01.088164  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.crt: {Name:mk94379303f53e5344b40a7999289ea8885bcde6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:01.088348  291820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.key ...
	I1121 13:57:01.088363  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.key: {Name:mkc4cde24a02f2ff2cdb01962dce6dda257c577d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:01.088575  291820 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 13:57:01.088622  291820 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 13:57:01.088649  291820 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 13:57:01.088680  291820 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 13:57:01.089608  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 13:57:01.111153  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 13:57:01.129574  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 13:57:01.148784  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 13:57:01.168220  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 13:57:01.188040  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 13:57:01.207518  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 13:57:01.226338  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 13:57:01.244460  291820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 13:57:01.263536  291820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 13:57:01.277620  291820 ssh_runner.go:195] Run: openssl version
	I1121 13:57:01.284224  291820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 13:57:01.293221  291820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:57:01.297477  291820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:57:01.297550  291820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:57:01.339475  291820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
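The b5213941.0 link name above is not arbitrary: it is OpenSSL's subject-name hash of the CA, exactly what the x509 -hash invocation two lines up computes, and the name the TLS stack looks up in /etc/ssl/certs during chain building:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941  ->  satisfied by the /etc/ssl/certs/b5213941.0 symlink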
	I1121 13:57:01.348344  291820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 13:57:01.352101  291820 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 13:57:01.352151  291820 kubeadm.go:401] StartCluster: {Name:addons-494116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-494116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:57:01.352225  291820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:57:01.352289  291820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:57:01.381277  291820 cri.go:89] found id: ""
	I1121 13:57:01.381364  291820 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 13:57:01.389616  291820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 13:57:01.397714  291820 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 13:57:01.397833  291820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 13:57:01.406076  291820 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 13:57:01.406102  291820 kubeadm.go:158] found existing configuration files:
	
	I1121 13:57:01.406155  291820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 13:57:01.414329  291820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 13:57:01.414417  291820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 13:57:01.422408  291820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 13:57:01.430855  291820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 13:57:01.430951  291820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 13:57:01.438812  291820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 13:57:01.447299  291820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 13:57:01.447373  291820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 13:57:01.455362  291820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 13:57:01.463798  291820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 13:57:01.463895  291820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 13:57:01.471749  291820 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 13:57:01.538851  291820 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 13:57:01.539138  291820 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 13:57:01.610817  291820 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 13:57:16.606218  291820 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 13:57:16.606274  291820 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 13:57:16.606366  291820 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 13:57:16.606423  291820 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 13:57:16.606459  291820 kubeadm.go:319] OS: Linux
	I1121 13:57:16.606509  291820 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 13:57:16.606559  291820 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 13:57:16.606608  291820 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 13:57:16.606658  291820 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 13:57:16.606708  291820 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 13:57:16.606760  291820 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 13:57:16.606807  291820 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 13:57:16.606857  291820 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 13:57:16.606905  291820 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 13:57:16.606980  291820 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 13:57:16.607079  291820 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 13:57:16.607172  291820 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 13:57:16.607236  291820 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 13:57:16.610181  291820 out.go:252]   - Generating certificates and keys ...
	I1121 13:57:16.610369  291820 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 13:57:16.610463  291820 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 13:57:16.610539  291820 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 13:57:16.610609  291820 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 13:57:16.610688  291820 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 13:57:16.610746  291820 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 13:57:16.610813  291820 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 13:57:16.610966  291820 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-494116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 13:57:16.611069  291820 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 13:57:16.611231  291820 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-494116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 13:57:16.611341  291820 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 13:57:16.611444  291820 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 13:57:16.611503  291820 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 13:57:16.611568  291820 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 13:57:16.611627  291820 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 13:57:16.611692  291820 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 13:57:16.611761  291820 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 13:57:16.611835  291820 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 13:57:16.611896  291820 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 13:57:16.611987  291820 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 13:57:16.612061  291820 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 13:57:16.615205  291820 out.go:252]   - Booting up control plane ...
	I1121 13:57:16.615323  291820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 13:57:16.615412  291820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 13:57:16.615504  291820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 13:57:16.615648  291820 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 13:57:16.615788  291820 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 13:57:16.615923  291820 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 13:57:16.616021  291820 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 13:57:16.616066  291820 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 13:57:16.616205  291820 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 13:57:16.616319  291820 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 13:57:16.616413  291820 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50089421s
	I1121 13:57:16.616519  291820 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 13:57:16.616632  291820 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1121 13:57:16.616743  291820 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 13:57:16.616852  291820 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 13:57:16.616941  291820 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.623433392s
	I1121 13:57:16.617021  291820 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.168963206s
	I1121 13:57:16.617111  291820 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001384069s
	I1121 13:57:16.617235  291820 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 13:57:16.617373  291820 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 13:57:16.617449  291820 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 13:57:16.617687  291820 kubeadm.go:319] [mark-control-plane] Marking the node addons-494116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 13:57:16.617766  291820 kubeadm.go:319] [bootstrap-token] Using token: 3aw9oe.cyfsti0enmout33u
	I1121 13:57:16.622688  291820 out.go:252]   - Configuring RBAC rules ...
	I1121 13:57:16.622859  291820 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 13:57:16.622971  291820 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 13:57:16.623148  291820 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 13:57:16.623342  291820 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 13:57:16.623505  291820 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 13:57:16.623631  291820 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 13:57:16.623773  291820 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 13:57:16.623854  291820 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 13:57:16.623921  291820 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 13:57:16.623949  291820 kubeadm.go:319] 
	I1121 13:57:16.624058  291820 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 13:57:16.624072  291820 kubeadm.go:319] 
	I1121 13:57:16.624154  291820 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 13:57:16.624163  291820 kubeadm.go:319] 
	I1121 13:57:16.624190  291820 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 13:57:16.624260  291820 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 13:57:16.624321  291820 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 13:57:16.624333  291820 kubeadm.go:319] 
	I1121 13:57:16.624409  291820 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 13:57:16.624452  291820 kubeadm.go:319] 
	I1121 13:57:16.624541  291820 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 13:57:16.624554  291820 kubeadm.go:319] 
	I1121 13:57:16.624636  291820 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 13:57:16.624752  291820 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 13:57:16.624864  291820 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 13:57:16.624908  291820 kubeadm.go:319] 
	I1121 13:57:16.625022  291820 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 13:57:16.625116  291820 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 13:57:16.625124  291820 kubeadm.go:319] 
	I1121 13:57:16.625240  291820 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3aw9oe.cyfsti0enmout33u \
	I1121 13:57:16.625362  291820 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 13:57:16.625389  291820 kubeadm.go:319] 	--control-plane 
	I1121 13:57:16.625396  291820 kubeadm.go:319] 
	I1121 13:57:16.625523  291820 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 13:57:16.625561  291820 kubeadm.go:319] 
	I1121 13:57:16.625677  291820 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3aw9oe.cyfsti0enmout33u \
	I1121 13:57:16.625833  291820 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
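The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 pin of the cluster CA's public key; with this cluster's certificatesDir (/var/lib/minikube/certs, per the config earlier) it can be recomputed with the standard kubeadm recipe:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print 6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92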
	I1121 13:57:16.625870  291820 cni.go:84] Creating CNI manager for ""
	I1121 13:57:16.625882  291820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:57:16.629003  291820 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 13:57:16.631910  291820 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 13:57:16.636659  291820 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 13:57:16.636681  291820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 13:57:16.649468  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 13:57:16.940450  291820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 13:57:16.940579  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:16.940659  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-494116 minikube.k8s.io/updated_at=2025_11_21T13_57_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=addons-494116 minikube.k8s.io/primary=true
	I1121 13:57:17.181868  291820 ops.go:34] apiserver oom_adj: -16
	I1121 13:57:17.181972  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:17.682140  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:18.182529  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:18.682913  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:19.182468  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:19.682519  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:20.182156  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:20.682093  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:21.182130  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:21.682677  291820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:57:21.792148  291820 kubeadm.go:1114] duration metric: took 4.851615238s to wait for elevateKubeSystemPrivileges
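The burst of identical "get sa default" calls above is a ~500ms readiness poll: waiting for the default ServiceAccount to appear is a proxy for the control plane being ready to serve the minikube-rbac grant created just before it. An equivalent manual wait on the node:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done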
	I1121 13:57:21.792178  291820 kubeadm.go:403] duration metric: took 20.440031677s to StartCluster
	I1121 13:57:21.792195  291820 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:21.792303  291820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 13:57:21.792733  291820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:57:21.792966  291820 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 13:57:21.793066  291820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 13:57:21.793325  291820 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:57:21.793370  291820 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
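Each key in the toEnable map above maps onto a per-profile CLI toggle; the same state can be read back or changed by hand with the minikube binary used in this run, e.g.:

	minikube -p addons-494116 addons list
	minikube -p addons-494116 addons enable metrics-server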
	I1121 13:57:21.793489  291820 addons.go:70] Setting yakd=true in profile "addons-494116"
	I1121 13:57:21.793508  291820 addons.go:239] Setting addon yakd=true in "addons-494116"
	I1121 13:57:21.793534  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.793512  291820 addons.go:70] Setting inspektor-gadget=true in profile "addons-494116"
	I1121 13:57:21.793585  291820 addons.go:239] Setting addon inspektor-gadget=true in "addons-494116"
	I1121 13:57:21.793636  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.794011  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.794219  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.794611  291820 addons.go:70] Setting metrics-server=true in profile "addons-494116"
	I1121 13:57:21.794630  291820 addons.go:239] Setting addon metrics-server=true in "addons-494116"
	I1121 13:57:21.794660  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.795078  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.796960  291820 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-494116"
	I1121 13:57:21.796990  291820 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-494116"
	I1121 13:57:21.797018  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.797444  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.797891  291820 addons.go:70] Setting cloud-spanner=true in profile "addons-494116"
	I1121 13:57:21.797916  291820 addons.go:239] Setting addon cloud-spanner=true in "addons-494116"
	I1121 13:57:21.797941  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.798342  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.803931  291820 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-494116"
	I1121 13:57:21.803962  291820 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-494116"
	I1121 13:57:21.804006  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.804497  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.808368  291820 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-494116"
	I1121 13:57:21.808443  291820 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-494116"
	I1121 13:57:21.808474  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.808931  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.814077  291820 addons.go:70] Setting registry=true in profile "addons-494116"
	I1121 13:57:21.814165  291820 addons.go:239] Setting addon registry=true in "addons-494116"
	I1121 13:57:21.814286  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.847397  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.817209  291820 addons.go:70] Setting registry-creds=true in profile "addons-494116"
	I1121 13:57:21.866895  291820 addons.go:239] Setting addon registry-creds=true in "addons-494116"
	I1121 13:57:21.866965  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.817228  291820 addons.go:70] Setting storage-provisioner=true in profile "addons-494116"
	I1121 13:57:21.876846  291820 addons.go:239] Setting addon storage-provisioner=true in "addons-494116"
	I1121 13:57:21.876913  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.877479  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.817412  291820 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-494116"
	I1121 13:57:21.886785  291820 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-494116"
	I1121 13:57:21.887133  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.887527  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.817422  291820 addons.go:70] Setting volcano=true in profile "addons-494116"
	I1121 13:57:21.898555  291820 addons.go:239] Setting addon volcano=true in "addons-494116"
	I1121 13:57:21.898596  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.899049  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.907154  291820 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 13:57:21.910028  291820 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 13:57:21.910159  291820 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 13:57:21.910170  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1121 13:57:21.910228  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:21.817434  291820 addons.go:70] Setting volumesnapshots=true in profile "addons-494116"
	I1121 13:57:21.921367  291820 addons.go:239] Setting addon volumesnapshots=true in "addons-494116"
	I1121 13:57:21.921442  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.922052  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.817608  291820 out.go:179] * Verifying Kubernetes components...
	I1121 13:57:21.819339  291820 addons.go:70] Setting default-storageclass=true in profile "addons-494116"
	I1121 13:57:21.930394  291820 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-494116"
	I1121 13:57:21.930714  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.819353  291820 addons.go:70] Setting gcp-auth=true in profile "addons-494116"
	I1121 13:57:21.940538  291820 mustload.go:66] Loading cluster: addons-494116
	I1121 13:57:21.940742  291820 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:57:21.941004  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.950647  291820 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 13:57:21.950669  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 13:57:21.950726  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:21.819361  291820 addons.go:70] Setting ingress=true in profile "addons-494116"
	I1121 13:57:21.962533  291820 addons.go:239] Setting addon ingress=true in "addons-494116"
	I1121 13:57:21.962675  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.963485  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:21.819370  291820 addons.go:70] Setting ingress-dns=true in profile "addons-494116"
	I1121 13:57:21.990305  291820 addons.go:239] Setting addon ingress-dns=true in "addons-494116"
	I1121 13:57:21.990365  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:21.990880  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
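The repeated `docker container inspect addons-494116 --format={{.State.Status}}` calls above are how each addon goroutine confirms the node container is up before proceeding. A minimal sketch of that check using os/exec (a hypothetical helper, not minikube's actual cli_runner code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus shells out to the docker CLI the same way the log lines
// above do, asking only for the container's state string (e.g. "running").
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", name).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("addons-494116")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("container state:", status) // each addon waits for "running"
}
```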
	I1121 13:57:22.005341  291820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:57:22.010189  291820 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 13:57:22.015395  291820 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 13:57:22.015422  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 13:57:22.015506  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.066714  291820 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 13:57:22.076584  291820 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 13:57:22.076654  291820 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 13:57:22.076781  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.077089  291820 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 13:57:22.086242  291820 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 13:57:22.086335  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 13:57:22.086439  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.120165  291820 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 13:57:22.123271  291820 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 13:57:22.123378  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 13:57:22.123492  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.140160  291820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
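The long /bin/bash pipeline above injects a `hosts` block into the CoreDNS Corefile so that `host.minikube.internal` resolves to the gateway address 192.168.49.1, then feeds the edited ConfigMap back through `kubectl replace -f -`. A client-go equivalent of that edit-and-replace cycle might look like the following sketch; `patchCorefile` is a hypothetical stand-in for the sed edit and anchors on the same 8-space-indented "forward ." line the sed expression targets:

```go
package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// patchCorefile mirrors the sed edit in the log: insert a hosts{} stanza
// before the "forward . /etc/resolv.conf" line of the Corefile.
func patchCorefile(corefile string) string {
	hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
	return strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cm.Data["Corefile"] = patchCorefile(cm.Data["Corefile"])
	// Update plays the role of `kubectl replace -f -` in the logged pipeline.
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```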
	I1121 13:57:22.163695  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 13:57:22.169145  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 13:57:22.172204  291820 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 13:57:22.174997  291820 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 13:57:22.175018  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 13:57:22.175079  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	W1121 13:57:22.194949  291820 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1121 13:57:22.196695  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 13:57:22.197791  291820 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 13:57:22.199793  291820 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 13:57:22.203087  291820 addons.go:239] Setting addon default-storageclass=true in "addons-494116"
	I1121 13:57:22.203145  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:22.203605  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:22.208652  291820 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 13:57:22.208673  291820 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 13:57:22.208737  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.216516  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 13:57:22.218989  291820 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:57:22.219238  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:22.246949  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 13:57:22.247001  291820 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 13:57:22.247062  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.236102  291820 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-494116"
	I1121 13:57:22.247558  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:22.251073  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:22.264631  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.265423  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.271881  291820 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 13:57:22.271963  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 13:57:22.272020  291820 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 13:57:22.272029  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 13:57:22.272094  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.279742  291820 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:57:22.280098  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 13:57:22.280793  291820 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 13:57:22.280873  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.280156  291820 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 13:57:22.282466  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 13:57:22.282539  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.296735  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 13:57:22.299762  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 13:57:22.305881  291820 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 13:57:22.308672  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 13:57:22.308699  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 13:57:22.308784  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.330375  291820 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 13:57:22.334325  291820 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 13:57:22.334351  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 13:57:22.334416  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.362823  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.389701  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.392615  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.402936  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.409489  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.420477  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.441072  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.448763  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	W1121 13:57:22.450515  291820 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:57:22.450545  291820 retry.go:31] will retry after 339.163073ms: ssh: handshake failed: EOF
	I1121 13:57:22.464559  291820 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 13:57:22.464580  291820 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 13:57:22.464650  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.491115  291820 out.go:179]   - Using image docker.io/busybox:stable
	I1121 13:57:22.494142  291820 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 13:57:22.498245  291820 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 13:57:22.498268  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 13:57:22.498339  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:22.520624  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	W1121 13:57:22.525022  291820 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:57:22.525032  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.525049  291820 retry.go:31] will retry after 138.757203ms: ssh: handshake failed: EOF
	I1121 13:57:22.556125  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:22.560313  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	W1121 13:57:22.561524  291820 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:57:22.561548  291820 retry.go:31] will retry after 194.866906ms: ssh: handshake failed: EOF
	I1121 13:57:22.585633  291820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1121 13:57:22.665051  291820 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:57:22.665133  291820 retry.go:31] will retry after 345.097013ms: ssh: handshake failed: EOF
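The sshutil warnings above show the dial-retry behavior: a handshake EOF is not fatal, the client just waits a randomized few hundred milliseconds and dials again, and all three retries here succeed. A stripped-down version of that retry loop (the jitter range and attempt cap are assumptions, not minikube's actual tuning):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// dialWithRetry retries a flaky dial a few times, sleeping a randomized
// backoff between attempts, much like the "will retry after ..." lines above.
func dialWithRetry(dial func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dial(); err == nil {
			return nil
		}
		wait := time.Duration(100+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := dialWithRetry(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("ssh: handshake failed: EOF")
		}
		return nil // third dial succeeds, as in the log
	}, 5)
	fmt.Println("result:", err)
}
```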
	I1121 13:57:22.948600  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 13:57:23.011498  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 13:57:23.013939  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 13:57:23.017493  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 13:57:23.017827  291820 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 13:57:23.017864  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 13:57:23.020426  291820 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 13:57:23.020491  291820 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 13:57:23.034695  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 13:57:23.035647  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 13:57:23.037833  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 13:57:23.116473  291820 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 13:57:23.116496  291820 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 13:57:23.139429  291820 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 13:57:23.139460  291820 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 13:57:23.147599  291820 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 13:57:23.147621  291820 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 13:57:23.176861  291820 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 13:57:23.176935  291820 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 13:57:23.178486  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 13:57:23.210735  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 13:57:23.226133  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 13:57:23.226210  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 13:57:23.268222  291820 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 13:57:23.268294  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 13:57:23.269033  291820 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 13:57:23.269082  291820 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 13:57:23.293729  291820 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.153530463s)
	I1121 13:57:23.293756  291820 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1121 13:57:23.295258  291820 node_ready.go:35] waiting up to 6m0s for node "addons-494116" to be "Ready" ...
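The node_ready.go poll started here keeps reading the node object until its Ready condition flips to True; the `"Ready":"False" status (will retry)` warnings that recur below are the negative results of that poll. The check itself reduces to reading one condition off the node status, roughly as in this client-go sketch (a helper fragment under assumed names, not minikube's code):

```go
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node has a Ready condition with
// status "True", which is what the node_ready.go poll above waits for.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition reported yet
}
```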
	I1121 13:57:23.374818  291820 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 13:57:23.374892  291820 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 13:57:23.406411  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 13:57:23.406488  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 13:57:23.436974  291820 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 13:57:23.437051  291820 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 13:57:23.461906  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 13:57:23.480097  291820 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 13:57:23.480171  291820 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 13:57:23.564258  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 13:57:23.567024  291820 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 13:57:23.567091  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 13:57:23.603483  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 13:57:23.633480  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 13:57:23.633556  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 13:57:23.652353  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 13:57:23.652440  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 13:57:23.689552  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 13:57:23.689632  291820 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 13:57:23.722836  291820 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 13:57:23.722913  291820 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 13:57:23.761398  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 13:57:23.797069  291820 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-494116" context rescaled to 1 replicas
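The "rescaled to 1 replicas" line is the coredns Deployment being trimmed to a single replica for this test profile. Through the API the same rescale goes via the scale subresource; a sketch (the namespace and deployment name are the ones from the log, the code itself is illustrative):

```go
package rescale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleDeployment sets the replica count via the scale subresource, the
// API-level equivalent of the "rescaled to 1 replicas" step in the log,
// e.g. scaleDeployment(ctx, cs, "kube-system", "coredns", 1).
func scaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}
```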
	I1121 13:57:23.915353  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 13:57:23.915376  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 13:57:23.917926  291820 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:57:23.917945  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 13:57:24.155174  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 13:57:24.155200  291820 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 13:57:24.167522  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:57:24.350452  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 13:57:24.350476  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 13:57:24.596616  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 13:57:24.596641  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 13:57:24.858252  291820 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 13:57:24.858282  291820 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 13:57:24.989612  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.040920593s)
	I1121 13:57:24.989704  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.978130835s)
	I1121 13:57:24.989746  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.975736924s)
	I1121 13:57:25.153081  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
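Several of the apply steps above, like this csi-hostpath-driver one, bundle every manifest for one addon into a single kubectl invocation, so a failure retries the whole step rather than leaving a half-applied addon. Building that invocation programmatically is plain argument assembly; a sketch using os/exec (the kubectl and kubeconfig paths are the ones from the log, the helper name is hypothetical):

```go
package apply

import (
	"fmt"
	"os"
	"os/exec"
)

// kubectlApply shells out once for a whole set of manifests, matching the
// multi "-f" invocations in the log above.
func kubectlApply(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}
```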
	W1121 13:57:25.326756  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:26.821777  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.804201922s)
	I1121 13:57:26.821855  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.78709922s)
	I1121 13:57:27.725894  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.690175519s)
	I1121 13:57:27.725926  291820 addons.go:495] Verifying addon ingress=true in "addons-494116"
	I1121 13:57:27.726148  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.688250576s)
	I1121 13:57:27.726220  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.547664303s)
	I1121 13:57:27.726287  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.515492076s)
	I1121 13:57:27.726507  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.264527904s)
	I1121 13:57:27.726527  291820 addons.go:495] Verifying addon registry=true in "addons-494116"
	I1121 13:57:27.726968  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.162642593s)
	I1121 13:57:27.726988  291820 addons.go:495] Verifying addon metrics-server=true in "addons-494116"
	I1121 13:57:27.727031  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.123479736s)
	I1121 13:57:27.727071  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.965601344s)
	I1121 13:57:27.727308  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.559754622s)
	W1121 13:57:27.728074  291820 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 13:57:27.728095  291820 retry.go:31] will retry after 182.892821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
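The failure above is a CRD discovery race rather than a broken manifest: the VolumeSnapshotClass CRD and a VolumeSnapshotClass object are applied in the same `kubectl apply`, and the API server has not registered the new kind by the time the object is validated, hence "ensure CRDs are installed first". The retry below (rerun with `apply --force`) succeeds once the CRDs are registered. One way to avoid the race is to wait for the CRD's Established condition between two separate applies; a sketch with the apiextensions client (standard package paths assumed, polling interval is an assumption):

```go
package crdwait

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForEstablished polls the CRD until the API server reports it as
// Established, i.e. the point at which objects of its kind can be created.
func waitForEstablished(ctx context.Context, cs clientset.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not visible yet, keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
```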
	I1121 13:57:27.728916  291820 out.go:179] * Verifying ingress addon...
	I1121 13:57:27.730898  291820 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-494116 service yakd-dashboard -n yakd-dashboard
	
	I1121 13:57:27.730897  291820 out.go:179] * Verifying registry addon...
	I1121 13:57:27.734695  291820 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 13:57:27.734695  291820 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1121 13:57:27.744656  291820 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 13:57:27.744676  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:27.745007  291820 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 13:57:27.745022  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
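Each "Waiting for pod with label ..." loop above (kapi.go) is a label-selector list followed by a phase check, repeated on an interval until the pods leave Pending; the long run of near-identical lines that follows is that loop ticking roughly twice a second per addon. Reduced to its core with client-go (a sketch; minikube's real kapi helpers carry more state):

```go
package kapi

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsRunning lists pods matching the selector (e.g.
// "kubernetes.io/minikube-addons=registry") and reports whether every
// match has reached the Running phase.
func podsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil // still Pending, caller retries
		}
	}
	return true, nil
}
```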
	W1121 13:57:27.805358  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:27.911653  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:57:28.099584  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.946452826s)
	I1121 13:57:28.099666  291820 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-494116"
	I1121 13:57:28.102619  291820 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 13:57:28.106371  291820 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 13:57:28.119007  291820 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 13:57:28.119030  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:28.239193  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:28.239326  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:28.609846  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:28.739413  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:28.739862  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.109700  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:29.238894  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.239005  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:29.609932  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:29.738655  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:29.738788  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.848009  291820 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 13:57:29.848098  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:29.865625  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:29.979353  291820 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 13:57:30.004198  291820 addons.go:239] Setting addon gcp-auth=true in "addons-494116"
	I1121 13:57:30.004252  291820 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:57:30.004761  291820 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:57:30.038251  291820 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 13:57:30.038309  291820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:57:30.089299  291820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:57:30.110688  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:30.237957  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:30.238292  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:30.299042  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:30.610573  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:30.708661  291820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.796910559s)
	I1121 13:57:30.711981  291820 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:57:30.715030  291820 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 13:57:30.717780  291820 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 13:57:30.717803  291820 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 13:57:30.733288  291820 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 13:57:30.733311  291820 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 13:57:30.740114  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:30.740317  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:30.747990  291820 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 13:57:30.748014  291820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 13:57:30.761129  291820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 13:57:31.112561  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:31.241036  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:31.253566  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:31.274461  291820 addons.go:495] Verifying addon gcp-auth=true in "addons-494116"
	I1121 13:57:31.277402  291820 out.go:179] * Verifying gcp-auth addon...
	I1121 13:57:31.281131  291820 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 13:57:31.290354  291820 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 13:57:31.290381  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:31.610210  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:31.738643  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:31.738793  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:31.784366  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:32.109314  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:32.239648  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:32.240032  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:32.284878  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:32.609864  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:32.738675  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:32.738881  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:32.785084  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:32.803248  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:33.109999  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:33.238179  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:33.238588  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:33.284155  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:33.609770  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:33.737808  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:33.738176  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:33.785055  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:34.110398  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:34.239223  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:34.239349  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:34.285372  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:34.610054  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:34.738416  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:34.738770  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:34.784700  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:35.109990  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:35.238329  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:35.238626  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:35.284514  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:35.298208  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:35.610459  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:35.738795  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:35.739239  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:35.784256  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:36.110057  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:36.238245  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:36.238352  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:36.284275  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:36.609565  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:36.737918  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:36.738160  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:36.783939  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:37.110548  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:37.238528  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:37.238941  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:37.284750  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1121 13:57:37.298246  291820 node_ready.go:57] node "addons-494116" has "Ready":"False" status (will retry)
	I1121 13:57:37.609576  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:37.738903  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:37.738947  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:37.784515  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:38.109667  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:38.238907  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:38.239250  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:38.283964  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 repeats the same four label-selector waits (gcp-auth, csi-hostpath-driver, registry, ingress-nginx) every ~500ms from 13:57:38 through 13:58:02, all still Pending; node_ready.go:57 warns every ~2.5s that node "addons-494116" still has "Ready":"False" (will retry) ...]
	I1121 13:58:02.262634  291820 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 13:58:02.262663  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:02.262776  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
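
Each kapi.go:96 line above is one iteration of a label-selector poll. A minimal client-go sketch of the same wait, assuming a kubeconfig at the standard path and that the registry addon pods live in kube-system (both assumptions, not values confirmed by this run):

// poll_selector.go: sketch of waiting for pods matching a label selector.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the standard ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		// Assumption: the registry addon pods live in kube-system.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("%d/%d pods running\n", running, len(pods.Items))
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}

The 500ms sleep mirrors the cadence visible in the log timestamps.
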
	I1121 13:58:02.308831  291820 node_ready.go:49] node "addons-494116" is "Ready"
	I1121 13:58:02.308864  291820 node_ready.go:38] duration metric: took 39.01357496s for node "addons-494116" to be "Ready" ...
	I1121 13:58:02.308878  291820 api_server.go:52] waiting for apiserver process to appear ...
	I1121 13:58:02.308937  291820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 13:58:02.318544  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:02.332431  291820 api_server.go:72] duration metric: took 40.539432432s to wait for apiserver process to appear ...
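
The apiserver-process wait above boils down to a pgrep whose exit status is the answer. A sketch of the same check, assuming it runs directly on the minikube node rather than over the SSH runner:

// apiserver_pgrep.go: sketch of the "apiserver process" check from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 0 if a matching process exists, 1 if not.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}
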
	I1121 13:58:02.332461  291820 api_server.go:88] waiting for apiserver healthz status ...
	I1121 13:58:02.332483  291820 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1121 13:58:02.344046  291820 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1121 13:58:02.345666  291820 api_server.go:141] control plane version: v1.34.1
	I1121 13:58:02.345699  291820 api_server.go:131] duration metric: took 13.230612ms to wait for apiserver health ...
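
The healthz probe is a plain HTTPS GET against the endpoint shown in the log. A self-contained sketch; skipping TLS verification instead of loading the cluster CA is an assumption made only to keep it short:

// healthz_probe.go: sketch of the /healthz check recorded above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// The real check authenticates with the cluster CA; skipping
		// verification here only keeps the sketch self-contained.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
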
	I1121 13:58:02.345709  291820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 13:58:02.354169  291820 system_pods.go:59] 19 kube-system pods found
	I1121 13:58:02.354220  291820 system_pods.go:61] "coredns-66bc5c9577-frfnw" [6ef6e1fd-b7ba-4c77-ad3d-5bc589360cc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:58:02.354231  291820 system_pods.go:61] "csi-hostpath-attacher-0" [e0486779-420d-4eb9-bccd-3cfd26e61825] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:58:02.354242  291820 system_pods.go:61] "csi-hostpath-resizer-0" [ac63a392-f7c4-44db-a7fe-586b0d4bc265] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:58:02.354247  291820 system_pods.go:61] "csi-hostpathplugin-l2g77" [15e321e1-1e6a-4260-b28e-0d9f8af1f143] Pending
	I1121 13:58:02.354260  291820 system_pods.go:61] "etcd-addons-494116" [075ec525-a3c6-4137-aefb-3379eb8ef3c1] Running
	I1121 13:58:02.354274  291820 system_pods.go:61] "kindnet-5wkpj" [dd9b231b-1e87-4f12-a860-c02bf7976209] Running
	I1121 13:58:02.354280  291820 system_pods.go:61] "kube-apiserver-addons-494116" [5d485f91-710c-496c-b37f-7f6929814de6] Running
	I1121 13:58:02.354285  291820 system_pods.go:61] "kube-controller-manager-addons-494116" [2c1bf1fb-98e2-406c-b84d-43a747873724] Running
	I1121 13:58:02.354294  291820 system_pods.go:61] "kube-ingress-dns-minikube" [f0a8f2eb-75c6-478e-9434-463008f212b6] Pending
	I1121 13:58:02.354299  291820 system_pods.go:61] "kube-proxy-cnpzl" [bbd71e9d-2f4a-493e-80d9-47059ebffa52] Running
	I1121 13:58:02.354303  291820 system_pods.go:61] "kube-scheduler-addons-494116" [db9f8e07-36fe-4c14-9d9a-f6009b0d60d0] Running
	I1121 13:58:02.354308  291820 system_pods.go:61] "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Pending
	I1121 13:58:02.354314  291820 system_pods.go:61] "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Pending
	I1121 13:58:02.354319  291820 system_pods.go:61] "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Pending
	I1121 13:58:02.354327  291820 system_pods.go:61] "registry-creds-764b6fb674-sl95w" [b6f343b9-9d5d-4236-a7b8-a958f297db46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:58:02.354339  291820 system_pods.go:61] "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Pending
	I1121 13:58:02.354352  291820 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jrcpn" [e4aa5f65-e65c-4b8f-b775-80cf9ef5f801] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.354362  291820 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vckmf" [53b85d18-2d86-4032-8f17-7be89eaa9beb] Pending
	I1121 13:58:02.354368  291820 system_pods.go:61] "storage-provisioner" [d930299d-8e9d-4e9a-907a-15d7167e4f56] Pending
	I1121 13:58:02.354378  291820 system_pods.go:74] duration metric: took 8.663643ms to wait for pod list to return data ...
	I1121 13:58:02.354386  291820 default_sa.go:34] waiting for default service account to be created ...
	I1121 13:58:02.378988  291820 default_sa.go:45] found service account: "default"
	I1121 13:58:02.379026  291820 default_sa.go:55] duration metric: took 24.618259ms for default service account to be created ...
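
The default-service-account wait is a single lookup in the default namespace; a sketch, under the same kubeconfig assumption:

// default_sa.go: sketch of the default service-account check above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	sa, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("found service account:", sa.Name)
}
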
	I1121 13:58:02.379038  291820 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 13:58:02.408073  291820 system_pods.go:86] 19 kube-system pods found
	I1121 13:58:02.408117  291820 system_pods.go:89] "coredns-66bc5c9577-frfnw" [6ef6e1fd-b7ba-4c77-ad3d-5bc589360cc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:58:02.408127  291820 system_pods.go:89] "csi-hostpath-attacher-0" [e0486779-420d-4eb9-bccd-3cfd26e61825] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:58:02.408135  291820 system_pods.go:89] "csi-hostpath-resizer-0" [ac63a392-f7c4-44db-a7fe-586b0d4bc265] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:58:02.408139  291820 system_pods.go:89] "csi-hostpathplugin-l2g77" [15e321e1-1e6a-4260-b28e-0d9f8af1f143] Pending
	I1121 13:58:02.408143  291820 system_pods.go:89] "etcd-addons-494116" [075ec525-a3c6-4137-aefb-3379eb8ef3c1] Running
	I1121 13:58:02.408148  291820 system_pods.go:89] "kindnet-5wkpj" [dd9b231b-1e87-4f12-a860-c02bf7976209] Running
	I1121 13:58:02.408152  291820 system_pods.go:89] "kube-apiserver-addons-494116" [5d485f91-710c-496c-b37f-7f6929814de6] Running
	I1121 13:58:02.408156  291820 system_pods.go:89] "kube-controller-manager-addons-494116" [2c1bf1fb-98e2-406c-b84d-43a747873724] Running
	I1121 13:58:02.408161  291820 system_pods.go:89] "kube-ingress-dns-minikube" [f0a8f2eb-75c6-478e-9434-463008f212b6] Pending
	I1121 13:58:02.408166  291820 system_pods.go:89] "kube-proxy-cnpzl" [bbd71e9d-2f4a-493e-80d9-47059ebffa52] Running
	I1121 13:58:02.408171  291820 system_pods.go:89] "kube-scheduler-addons-494116" [db9f8e07-36fe-4c14-9d9a-f6009b0d60d0] Running
	I1121 13:58:02.408182  291820 system_pods.go:89] "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Pending
	I1121 13:58:02.408186  291820 system_pods.go:89] "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Pending
	I1121 13:58:02.408194  291820 system_pods.go:89] "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Pending
	I1121 13:58:02.408201  291820 system_pods.go:89] "registry-creds-764b6fb674-sl95w" [b6f343b9-9d5d-4236-a7b8-a958f297db46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:58:02.408211  291820 system_pods.go:89] "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Pending
	I1121 13:58:02.408218  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jrcpn" [e4aa5f65-e65c-4b8f-b775-80cf9ef5f801] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.408231  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vckmf" [53b85d18-2d86-4032-8f17-7be89eaa9beb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.408235  291820 system_pods.go:89] "storage-provisioner" [d930299d-8e9d-4e9a-907a-15d7167e4f56] Pending
	I1121 13:58:02.408250  291820 retry.go:31] will retry after 265.505455ms: missing components: kube-dns
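
The retry.go:31 line shows the pattern used for the k8s-apps wait: list kube-system pods, report any expected component not yet Running (here kube-dns, backed by coredns), then back off briefly. A simplified sketch; mapping kube-dns to pods named coredns-* is an assumption:

// missing_components.go: sketch of the system-pods retry seen above.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func missing(client *kubernetes.Clientset) []string {
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	var gaps []string
	for _, p := range pods.Items {
		// kube-dns is "missing" until its coredns pod reports Running.
		if strings.HasPrefix(p.Name, "coredns-") && p.Status.Phase != corev1.PodRunning {
			gaps = append(gaps, "kube-dns")
		}
	}
	return gaps
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for gaps := missing(client); len(gaps) > 0; gaps = missing(client) {
		fmt.Println("will retry, missing components:", gaps)
		time.Sleep(300 * time.Millisecond) // the log backs off ~265-305ms per attempt
	}
}
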
	I1121 13:58:02.620035  291820 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 13:58:02.620070  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:02.682447  291820 system_pods.go:86] 19 kube-system pods found
	I1121 13:58:02.682485  291820 system_pods.go:89] "coredns-66bc5c9577-frfnw" [6ef6e1fd-b7ba-4c77-ad3d-5bc589360cc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:58:02.682495  291820 system_pods.go:89] "csi-hostpath-attacher-0" [e0486779-420d-4eb9-bccd-3cfd26e61825] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:58:02.682502  291820 system_pods.go:89] "csi-hostpath-resizer-0" [ac63a392-f7c4-44db-a7fe-586b0d4bc265] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:58:02.682518  291820 system_pods.go:89] "csi-hostpathplugin-l2g77" [15e321e1-1e6a-4260-b28e-0d9f8af1f143] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:58:02.682524  291820 system_pods.go:89] "etcd-addons-494116" [075ec525-a3c6-4137-aefb-3379eb8ef3c1] Running
	I1121 13:58:02.682529  291820 system_pods.go:89] "kindnet-5wkpj" [dd9b231b-1e87-4f12-a860-c02bf7976209] Running
	I1121 13:58:02.682534  291820 system_pods.go:89] "kube-apiserver-addons-494116" [5d485f91-710c-496c-b37f-7f6929814de6] Running
	I1121 13:58:02.682545  291820 system_pods.go:89] "kube-controller-manager-addons-494116" [2c1bf1fb-98e2-406c-b84d-43a747873724] Running
	I1121 13:58:02.682551  291820 system_pods.go:89] "kube-ingress-dns-minikube" [f0a8f2eb-75c6-478e-9434-463008f212b6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:58:02.682563  291820 system_pods.go:89] "kube-proxy-cnpzl" [bbd71e9d-2f4a-493e-80d9-47059ebffa52] Running
	I1121 13:58:02.682568  291820 system_pods.go:89] "kube-scheduler-addons-494116" [db9f8e07-36fe-4c14-9d9a-f6009b0d60d0] Running
	I1121 13:58:02.682572  291820 system_pods.go:89] "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Pending
	I1121 13:58:02.682576  291820 system_pods.go:89] "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Pending
	I1121 13:58:02.682591  291820 system_pods.go:89] "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:58:02.682602  291820 system_pods.go:89] "registry-creds-764b6fb674-sl95w" [b6f343b9-9d5d-4236-a7b8-a958f297db46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:58:02.682608  291820 system_pods.go:89] "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:58:02.682617  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jrcpn" [e4aa5f65-e65c-4b8f-b775-80cf9ef5f801] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.682624  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vckmf" [53b85d18-2d86-4032-8f17-7be89eaa9beb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.682630  291820 system_pods.go:89] "storage-provisioner" [d930299d-8e9d-4e9a-907a-15d7167e4f56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:58:02.682650  291820 retry.go:31] will retry after 291.611485ms: missing components: kube-dns
	I1121 13:58:02.745621  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:02.746018  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:02.785421  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:02.980137  291820 system_pods.go:86] 19 kube-system pods found
	I1121 13:58:02.980175  291820 system_pods.go:89] "coredns-66bc5c9577-frfnw" [6ef6e1fd-b7ba-4c77-ad3d-5bc589360cc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:58:02.980193  291820 system_pods.go:89] "csi-hostpath-attacher-0" [e0486779-420d-4eb9-bccd-3cfd26e61825] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:58:02.980203  291820 system_pods.go:89] "csi-hostpath-resizer-0" [ac63a392-f7c4-44db-a7fe-586b0d4bc265] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:58:02.980215  291820 system_pods.go:89] "csi-hostpathplugin-l2g77" [15e321e1-1e6a-4260-b28e-0d9f8af1f143] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:58:02.980224  291820 system_pods.go:89] "etcd-addons-494116" [075ec525-a3c6-4137-aefb-3379eb8ef3c1] Running
	I1121 13:58:02.980230  291820 system_pods.go:89] "kindnet-5wkpj" [dd9b231b-1e87-4f12-a860-c02bf7976209] Running
	I1121 13:58:02.980241  291820 system_pods.go:89] "kube-apiserver-addons-494116" [5d485f91-710c-496c-b37f-7f6929814de6] Running
	I1121 13:58:02.980246  291820 system_pods.go:89] "kube-controller-manager-addons-494116" [2c1bf1fb-98e2-406c-b84d-43a747873724] Running
	I1121 13:58:02.980253  291820 system_pods.go:89] "kube-ingress-dns-minikube" [f0a8f2eb-75c6-478e-9434-463008f212b6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:58:02.980269  291820 system_pods.go:89] "kube-proxy-cnpzl" [bbd71e9d-2f4a-493e-80d9-47059ebffa52] Running
	I1121 13:58:02.980275  291820 system_pods.go:89] "kube-scheduler-addons-494116" [db9f8e07-36fe-4c14-9d9a-f6009b0d60d0] Running
	I1121 13:58:02.980290  291820 system_pods.go:89] "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:58:02.980298  291820 system_pods.go:89] "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:58:02.980304  291820 system_pods.go:89] "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:58:02.980314  291820 system_pods.go:89] "registry-creds-764b6fb674-sl95w" [b6f343b9-9d5d-4236-a7b8-a958f297db46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:58:02.980322  291820 system_pods.go:89] "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:58:02.980333  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jrcpn" [e4aa5f65-e65c-4b8f-b775-80cf9ef5f801] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.980352  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vckmf" [53b85d18-2d86-4032-8f17-7be89eaa9beb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:02.980358  291820 system_pods.go:89] "storage-provisioner" [d930299d-8e9d-4e9a-907a-15d7167e4f56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:58:02.980375  291820 retry.go:31] will retry after 304.560831ms: missing components: kube-dns
	I1121 13:58:03.111625  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:03.256781  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:03.258481  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:03.332042  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:03.332977  291820 system_pods.go:86] 19 kube-system pods found
	I1121 13:58:03.333009  291820 system_pods.go:89] "coredns-66bc5c9577-frfnw" [6ef6e1fd-b7ba-4c77-ad3d-5bc589360cc4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:58:03.333051  291820 system_pods.go:89] "csi-hostpath-attacher-0" [e0486779-420d-4eb9-bccd-3cfd26e61825] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:58:03.333071  291820 system_pods.go:89] "csi-hostpath-resizer-0" [ac63a392-f7c4-44db-a7fe-586b0d4bc265] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:58:03.333080  291820 system_pods.go:89] "csi-hostpathplugin-l2g77" [15e321e1-1e6a-4260-b28e-0d9f8af1f143] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:58:03.333090  291820 system_pods.go:89] "etcd-addons-494116" [075ec525-a3c6-4137-aefb-3379eb8ef3c1] Running
	I1121 13:58:03.333094  291820 system_pods.go:89] "kindnet-5wkpj" [dd9b231b-1e87-4f12-a860-c02bf7976209] Running
	I1121 13:58:03.333100  291820 system_pods.go:89] "kube-apiserver-addons-494116" [5d485f91-710c-496c-b37f-7f6929814de6] Running
	I1121 13:58:03.333117  291820 system_pods.go:89] "kube-controller-manager-addons-494116" [2c1bf1fb-98e2-406c-b84d-43a747873724] Running
	I1121 13:58:03.333123  291820 system_pods.go:89] "kube-ingress-dns-minikube" [f0a8f2eb-75c6-478e-9434-463008f212b6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:58:03.333129  291820 system_pods.go:89] "kube-proxy-cnpzl" [bbd71e9d-2f4a-493e-80d9-47059ebffa52] Running
	I1121 13:58:03.333140  291820 system_pods.go:89] "kube-scheduler-addons-494116" [db9f8e07-36fe-4c14-9d9a-f6009b0d60d0] Running
	I1121 13:58:03.333148  291820 system_pods.go:89] "metrics-server-85b7d694d7-5ptdb" [519fa634-5010-4714-80e8-5c6021451227] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:58:03.333160  291820 system_pods.go:89] "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:58:03.333167  291820 system_pods.go:89] "registry-6b586f9694-cvgwr" [d4804f42-0759-4095-942d-fd20e6892955] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:58:03.333173  291820 system_pods.go:89] "registry-creds-764b6fb674-sl95w" [b6f343b9-9d5d-4236-a7b8-a958f297db46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:58:03.333187  291820 system_pods.go:89] "registry-proxy-mlm5l" [cc2fce13-0044-46ba-9760-4efa6201f3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:58:03.333198  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jrcpn" [e4aa5f65-e65c-4b8f-b775-80cf9ef5f801] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:03.333205  291820 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vckmf" [53b85d18-2d86-4032-8f17-7be89eaa9beb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:58:03.333215  291820 system_pods.go:89] "storage-provisioner" [d930299d-8e9d-4e9a-907a-15d7167e4f56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:58:03.333225  291820 system_pods.go:126] duration metric: took 954.180104ms to wait for k8s-apps to be running ...
	I1121 13:58:03.333238  291820 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 13:58:03.333299  291820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 13:58:03.362513  291820 system_svc.go:56] duration metric: took 29.258403ms WaitForService to wait for kubelet
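
The kubelet check is a systemctl is-active probe whose exit status alone decides the result. A sketch using the canonical single-unit form (the log's literal invocation goes through minikube's SSH runner):

// kubelet_active.go: sketch of the kubelet liveness check above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active exits 0 when the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
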
	I1121 13:58:03.362553  291820 kubeadm.go:587] duration metric: took 41.569558954s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 13:58:03.362571  291820 node_conditions.go:102] verifying NodePressure condition ...
	I1121 13:58:03.381953  291820 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 13:58:03.382000  291820 node_conditions.go:123] node cpu capacity is 2
	I1121 13:58:03.382014  291820 node_conditions.go:105] duration metric: took 19.437575ms to run NodePressure ...
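
The NodePressure step reads the node's reported capacity (203034800Ki ephemeral storage and 2 CPUs in this run). A sketch that fetches the same fields, with the node name taken from the log and the kubeconfig path assumed:

// node_capacity.go: sketch of reading the capacity values verified above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-494116", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
}
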
	I1121 13:58:03.382028  291820 start.go:242] waiting for startup goroutines ...
	[... the same kapi.go:96 polling of the csi-hostpath-driver, ingress-nginx, registry, and gcp-auth selectors continues every ~500ms, still Pending, from 13:58:03 through 13:58:13 ...]
	I1121 13:58:14.112843  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:14.243679  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:14.244042  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:14.287435  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:14.613021  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:14.744074  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:14.745064  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:14.787613  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:15.115121  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:15.241801  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:15.242927  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:15.287661  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:15.616120  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:15.741990  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:15.742403  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:15.785076  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:16.111787  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:16.242158  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:16.243490  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:16.288233  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:16.615633  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:16.742311  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:16.742460  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:16.787793  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:17.110432  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:17.240550  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:17.241726  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:17.285153  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:17.613913  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:17.738894  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:17.739820  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:17.784525  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:18.110930  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:18.238849  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:18.239516  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:18.284718  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:18.611450  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:18.739911  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:18.740120  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:18.784115  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:19.110659  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:19.238724  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:19.239486  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:19.284726  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:19.610362  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:19.738433  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:19.739142  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:19.784196  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:20.114268  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:20.240680  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:20.241150  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:20.284178  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:20.610472  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:20.740002  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:20.740286  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:20.784534  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:21.110311  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:21.239928  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:21.240043  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:21.285224  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:21.611837  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:21.739749  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:21.740478  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:21.784781  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:22.110369  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:22.239546  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:22.240649  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:22.284549  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:22.610102  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:22.741896  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:22.742244  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:22.785254  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:23.109802  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:23.239033  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:23.239620  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:23.284617  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:23.610765  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:23.739196  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:23.739404  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:23.784103  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:24.111077  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:24.240046  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:24.240465  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:24.284749  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:24.610170  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:24.739291  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:24.740273  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:24.784419  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:25.110800  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:25.238643  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:25.239340  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:25.284600  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:25.647637  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:25.739438  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:25.739554  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:25.784319  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:26.109963  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:26.239677  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:26.240358  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:26.284615  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:26.610182  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:26.741243  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:26.742380  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:26.784906  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:27.110651  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:27.239445  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:27.239818  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:27.285058  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:27.611461  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:27.740615  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:27.741040  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:27.784757  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:28.110502  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:28.239036  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:28.239149  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:28.285000  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:28.612703  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:28.737808  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:28.739084  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:28.785082  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:29.111195  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:29.240230  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:29.240407  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:29.284224  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:29.619001  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:29.739940  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:29.740078  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:29.783745  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:30.111426  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:30.238842  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:30.239785  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:30.284601  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:30.610597  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:30.739508  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:30.740658  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:30.784689  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:31.110575  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:31.239025  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:31.239203  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:31.284004  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:31.611204  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:31.739871  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:31.740231  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:31.783977  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:32.110642  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:32.238210  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:32.238337  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:32.287109  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:32.610799  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:32.740579  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:32.740888  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:32.783675  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:33.110516  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:33.238932  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:33.239081  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:33.283986  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:33.610726  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:33.738456  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:33.738851  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:33.784620  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:34.110119  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:34.238751  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:34.238850  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:34.285423  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:34.609790  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:34.738709  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:34.738878  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:34.785151  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:35.112591  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:35.242201  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:35.242506  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:35.284606  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:35.647984  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:35.754603  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:35.754866  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:35.796695  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:36.110797  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:36.238232  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:36.239701  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:36.284586  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:36.609476  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:36.738428  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:36.739436  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:36.784582  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:37.110203  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:37.240076  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:37.240473  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:37.284467  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:37.611333  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:37.739408  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:37.739709  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:37.784160  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:38.111390  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:38.239443  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:38.239757  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:38.284333  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:38.612517  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:38.738827  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:38.739071  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:38.784943  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:39.110412  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:39.238744  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:39.239087  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:39.284920  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:39.610918  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:39.739199  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:39.739320  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:39.784430  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:40.110423  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:40.240715  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:40.241327  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:40.283934  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:40.611246  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:40.740281  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:40.740550  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:40.784527  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:41.110421  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:41.239800  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:41.240211  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:41.285219  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:41.610943  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:41.738971  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:41.739115  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:41.784979  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:42.111530  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:42.238671  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:58:42.238846  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:42.284908  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:42.615591  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:42.739457  291820 kapi.go:107] duration metric: took 1m15.004759298s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 13:58:42.739734  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:42.784663  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:43.110759  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... 29 near-identical poll lines elided (13:58:43 to 13:58:47): app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth and kubernetes.io/minikube-addons=csi-hostpath-driver still Pending: [<nil>] ...]
	I1121 13:58:48.110609  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:48.238832  291820 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:58:48.284942  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:48.612191  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:48.743792  291820 kapi.go:107] duration metric: took 1m21.009096583s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 13:58:48.843552  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:49.110120  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... 28 near-identical poll lines elided (13:58:49 to 13:58:56): kubernetes.io/minikube-addons=gcp-auth and kubernetes.io/minikube-addons=csi-hostpath-driver still Pending: [<nil>] ...]
	I1121 13:58:56.285632  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:58:56.611337  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:56.785298  291820 kapi.go:107] duration metric: took 1m25.504165091s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 13:58:56.788536  291820 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-494116 cluster.
	I1121 13:58:56.791486  291820 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 13:58:56.794408  291820 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1121 13:58:57.110009  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:57.610938  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:58.110311  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:58.611090  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:59.109298  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:58:59.610947  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:00.119986  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:00.610612  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:01.110668  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:01.627584  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:02.110279  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:02.610928  291820 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:59:03.109919  291820 kapi.go:107] duration metric: took 1m35.003550632s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 13:59:03.112926  291820 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, default-storageclass, inspektor-gadget, registry-creds, metrics-server, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1121 13:59:03.115881  291820 addons.go:530] duration metric: took 1m41.322490207s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner default-storageclass inspektor-gadget registry-creds metrics-server ingress-dns yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1121 13:59:03.115960  291820 start.go:247] waiting for cluster config update ...
	I1121 13:59:03.115984  291820 start.go:256] writing updated cluster config ...
	I1121 13:59:03.116300  291820 ssh_runner.go:195] Run: rm -f paused
	I1121 13:59:03.121007  291820 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 13:59:03.124458  291820 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-frfnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.129443  291820 pod_ready.go:94] pod "coredns-66bc5c9577-frfnw" is "Ready"
	I1121 13:59:03.129530  291820 pod_ready.go:86] duration metric: took 5.039005ms for pod "coredns-66bc5c9577-frfnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.132117  291820 pod_ready.go:83] waiting for pod "etcd-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.137066  291820 pod_ready.go:94] pod "etcd-addons-494116" is "Ready"
	I1121 13:59:03.137098  291820 pod_ready.go:86] duration metric: took 4.951153ms for pod "etcd-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.139274  291820 pod_ready.go:83] waiting for pod "kube-apiserver-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.144201  291820 pod_ready.go:94] pod "kube-apiserver-addons-494116" is "Ready"
	I1121 13:59:03.144230  291820 pod_ready.go:86] duration metric: took 4.925339ms for pod "kube-apiserver-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.146854  291820 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.525033  291820 pod_ready.go:94] pod "kube-controller-manager-addons-494116" is "Ready"
	I1121 13:59:03.525059  291820 pod_ready.go:86] duration metric: took 378.177301ms for pod "kube-controller-manager-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:03.725519  291820 pod_ready.go:83] waiting for pod "kube-proxy-cnpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:04.125107  291820 pod_ready.go:94] pod "kube-proxy-cnpzl" is "Ready"
	I1121 13:59:04.125137  291820 pod_ready.go:86] duration metric: took 399.590479ms for pod "kube-proxy-cnpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:04.325672  291820 pod_ready.go:83] waiting for pod "kube-scheduler-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:04.725180  291820 pod_ready.go:94] pod "kube-scheduler-addons-494116" is "Ready"
	I1121 13:59:04.725212  291820 pod_ready.go:86] duration metric: took 399.506556ms for pod "kube-scheduler-addons-494116" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:59:04.725232  291820 pod_ready.go:40] duration metric: took 1.604192565s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 13:59:04.778664  291820 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 13:59:04.781930  291820 out.go:179] * Done! kubectl is now configured to use "addons-494116" cluster and "default" namespace by default
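
The kapi.go runs above are plain label-selector polls: each addon's pods are re-listed on a fixed interval until they report Ready, and a duration metric is printed on success. A minimal client-go sketch of that polling shape, assuming a standard kubeconfig (an illustration of the pattern, not minikube's actual kapi.go code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel re-lists pods matching selector every 500ms until they are all
// Running, then prints a duration metric, mirroring the kapi.go output above.
func waitForLabel(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			// Nothing listed yet: the "current state: Pending: [<nil>]" case.
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
	if err == nil {
		fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(client, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}

On the gcp-auth messages printed at 13:58:56: per the tool's own output, a pod carrying a label with the gcp-auth-skip-secret key is skipped by the webhook. A minimal sketch of creating such a pod (the pod name, namespace, container command, and label value "true" are assumptions; only the label key comes from the log message):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical pod name
			// Label key taken from the log message above; it tells the
			// gcp-auth webhook not to mount GCP credentials into this pod.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}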
	
	
	==> CRI-O <==
	Nov 21 13:59:05 addons-494116 crio[832]: time="2025-11-21T13:59:05.865620944Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 13:59:07 addons-494116 crio[832]: time="2025-11-21T13:59:07.905837206Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=e90f58a0-6d11-4911-be48-3f921b61dbe7 name=/runtime.v1.ImageService/PullImage
	Nov 21 13:59:07 addons-494116 crio[832]: time="2025-11-21T13:59:07.906532332Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c52e3205-8465-48ac-bac2-9e5e7b55ffaa name=/runtime.v1.ImageService/ImageStatus
	Nov 21 13:59:07 addons-494116 crio[832]: time="2025-11-21T13:59:07.909488904Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f46f60aa-1f71-4519-bb61-a602cf9ead09 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 13:59:07 addons-494116 crio[832]: time="2025-11-21T13:59:07.917807806Z" level=info msg="Creating container: default/busybox/busybox" id=56d03fe2-94e4-480e-8fa7-8c774d67d81a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 13:59:07 addons-494116 crio[832]: time="2025-11-21T13:59:07.917939762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 13:59:07 addons-494116 crio[832]: time="2025-11-21T13:59:07.924665424Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 13:59:07 addons-494116 crio[832]: time="2025-11-21T13:59:07.925230654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 13:59:07 addons-494116 crio[832]: time="2025-11-21T13:59:07.940998594Z" level=info msg="Created container 48c9a7485d851ca642430adea35bf025f745fe83327bc793ad29e39a8414b685: default/busybox/busybox" id=56d03fe2-94e4-480e-8fa7-8c774d67d81a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 13:59:07 addons-494116 crio[832]: time="2025-11-21T13:59:07.944028882Z" level=info msg="Starting container: 48c9a7485d851ca642430adea35bf025f745fe83327bc793ad29e39a8414b685" id=6c77ec2f-a360-4b15-b92f-cf627390ff13 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 13:59:07 addons-494116 crio[832]: time="2025-11-21T13:59:07.948342845Z" level=info msg="Started container" PID=4904 containerID=48c9a7485d851ca642430adea35bf025f745fe83327bc793ad29e39a8414b685 description=default/busybox/busybox id=6c77ec2f-a360-4b15-b92f-cf627390ff13 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1701d8660feca3eb5d0d8264bb4569d5060f48e5e94ecb0eecce9f532cc6d8e4
	Nov 21 13:59:15 addons-494116 crio[832]: time="2025-11-21T13:59:15.972765674Z" level=info msg="Removing container: c8640cc1179116e9b0591ba8eb80d50894f50dfe49d3ebbce84e96ca60fa7f32" id=050ffec7-d18a-42b0-a483-f5bf92fc2a30 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 13:59:15 addons-494116 crio[832]: time="2025-11-21T13:59:15.975607604Z" level=info msg="Error loading conmon cgroup of container c8640cc1179116e9b0591ba8eb80d50894f50dfe49d3ebbce84e96ca60fa7f32: cgroup deleted" id=050ffec7-d18a-42b0-a483-f5bf92fc2a30 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.004068488Z" level=info msg="Removed container c8640cc1179116e9b0591ba8eb80d50894f50dfe49d3ebbce84e96ca60fa7f32: gcp-auth/gcp-auth-certs-patch-ztnfh/patch" id=050ffec7-d18a-42b0-a483-f5bf92fc2a30 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.012077661Z" level=info msg="Removing container: 042625a3156841da6ee10849f232afc929531633a0bb0e1c0385f057c271b313" id=af32e36b-cd3c-4e0e-9bd7-d334eac9ce8f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.015386711Z" level=info msg="Error loading conmon cgroup of container 042625a3156841da6ee10849f232afc929531633a0bb0e1c0385f057c271b313: cgroup deleted" id=af32e36b-cd3c-4e0e-9bd7-d334eac9ce8f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.044559592Z" level=info msg="Removed container 042625a3156841da6ee10849f232afc929531633a0bb0e1c0385f057c271b313: gcp-auth/gcp-auth-certs-create-cbsg2/create" id=af32e36b-cd3c-4e0e-9bd7-d334eac9ce8f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.050273342Z" level=info msg="Stopping pod sandbox: 6fbf395d6bd806bfa28c0477bc9ef34142c6ad5afd061790443ccea91c8baf10" id=e1dca135-54ff-4d01-8b48-7f6d86a5ff26 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.050336661Z" level=info msg="Stopped pod sandbox (already stopped): 6fbf395d6bd806bfa28c0477bc9ef34142c6ad5afd061790443ccea91c8baf10" id=e1dca135-54ff-4d01-8b48-7f6d86a5ff26 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.054005549Z" level=info msg="Removing pod sandbox: 6fbf395d6bd806bfa28c0477bc9ef34142c6ad5afd061790443ccea91c8baf10" id=15c2f64d-45b8-48c3-aa2c-aa037fdecaba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.07247415Z" level=info msg="Removed pod sandbox: 6fbf395d6bd806bfa28c0477bc9ef34142c6ad5afd061790443ccea91c8baf10" id=15c2f64d-45b8-48c3-aa2c-aa037fdecaba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.073333856Z" level=info msg="Stopping pod sandbox: ba57c133cae21dee19e65802c0fa2157fc271744adac4b41d4321a2b7c13a996" id=01a62357-2e67-4430-96fc-04cb55648148 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.073428683Z" level=info msg="Stopped pod sandbox (already stopped): ba57c133cae21dee19e65802c0fa2157fc271744adac4b41d4321a2b7c13a996" id=01a62357-2e67-4430-96fc-04cb55648148 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.07379685Z" level=info msg="Removing pod sandbox: ba57c133cae21dee19e65802c0fa2157fc271744adac4b41d4321a2b7c13a996" id=2b4d70eb-c194-44f6-8f2c-e333c561eaf1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 13:59:16 addons-494116 crio[832]: time="2025-11-21T13:59:16.083449118Z" level=info msg="Removed pod sandbox: ba57c133cae21dee19e65802c0fa2157fc271744adac4b41d4321a2b7c13a996" id=2b4d70eb-c194-44f6-8f2c-e333c561eaf1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	48c9a7485d851       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   1701d8660feca       busybox                                    default
	e4320f5fe8895       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	22aad46f46903       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	34ffe03bcd4d1       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	c5469a211994e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           19 seconds ago       Running             hostpath                                 0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	274e8dd2dab30       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 20 seconds ago       Running             gcp-auth                                 0                   3f040642b57a1       gcp-auth-78565c9fb4-c7vmg                  gcp-auth
	305f9dbd1c9ad       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            23 seconds ago       Running             gadget                                   0                   acf5292852d4b       gadget-mndpk                               gadget
	4298f174eb879       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                27 seconds ago       Running             node-driver-registrar                    0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	ae692940a4fca       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             28 seconds ago       Running             controller                               0                   ddb9d715cce10       ingress-nginx-controller-6c8bf45fb-2z7nm   ingress-nginx
	1e877b39bef08       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              35 seconds ago       Running             registry-proxy                           0                   eaa8a939445d4       registry-proxy-mlm5l                       kube-system
	e272b0978ee11       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   38 seconds ago       Exited              patch                                    0                   fe12046627b5a       ingress-nginx-admission-patch-2v528        ingress-nginx
	5d49e8d42c411       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     39 seconds ago       Running             nvidia-device-plugin-ctr                 0                   200db2bc9d56a       nvidia-device-plugin-daemonset-tkkkl       kube-system
	d84d7295e480c       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             44 seconds ago       Running             local-path-provisioner                   0                   e05e7695a99f5       local-path-provisioner-648f6765c9-v9sg7    local-path-storage
	3c55ac84412c8       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      45 seconds ago       Running             volume-snapshot-controller               0                   36bcec8d3db3f       snapshot-controller-7d9fbc56b8-vckmf       kube-system
	3e9d7de7df80e       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           45 seconds ago       Running             registry                                 0                   4694b44e7f0c1       registry-6b586f9694-cvgwr                  kube-system
	15f09ce47d75a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      47 seconds ago       Running             volume-snapshot-controller               0                   53ec72bd9840c       snapshot-controller-7d9fbc56b8-jrcpn       kube-system
	61d5ed18a54c6       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               48 seconds ago       Running             minikube-ingress-dns                     0                   bf96dc5563166       kube-ingress-dns-minikube                  kube-system
	0ac555261f857       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              56 seconds ago       Running             csi-resizer                              0                   643d7ec3c1de2       csi-hostpath-resizer-0                     kube-system
	f601cd1551b26       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   58 seconds ago       Running             csi-external-health-monitor-controller   0                   2faa2d4fc3e92       csi-hostpathplugin-l2g77                   kube-system
	fed54d28f384c       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              59 seconds ago       Running             yakd                                     0                   afc401bf413ae       yakd-dashboard-5ff678cb9-7w57n             yakd-dashboard
	de59a02962926       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   855cf723d691d       metrics-server-85b7d694d7-5ptdb            kube-system
	c6ab4e55a2c1d       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   9b3c478a9fb15       cloud-spanner-emulator-6f9fcf858b-rkw7z    default
	3c3896dadd82d       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   a80fa09884be1       csi-hostpath-attacher-0                    kube-system
	229383db24a6a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   57fc47afa660f       ingress-nginx-admission-create-lfq45       ingress-nginx
	a443f1743ed06       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   735e255b76b8a       storage-provisioner                        kube-system
	6fa60b05394e1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   1ea1833af98f9       coredns-66bc5c9577-frfnw                   kube-system
	d401871bd196a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   7e0422275aadd       kindnet-5wkpj                              kube-system
	013fd68042616       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             About a minute ago   Running             kube-proxy                               0                   21f7822690244       kube-proxy-cnpzl                           kube-system
	562af98fdae9f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   81c18f639ff6f       kube-controller-manager-addons-494116      kube-system
	870089e2cb7cf       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   0817640a1717b       kube-scheduler-addons-494116               kube-system
	753f8d0dbe26a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   80cfb52a4de88       kube-apiserver-addons-494116               kube-system
	1b81e66733803       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   4b082c79b9e7a       etcd-addons-494116                         kube-system
	
	
	==> coredns [6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a] <==
	[INFO] 10.244.0.17:55988 - 59021 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000127377s
	[INFO] 10.244.0.17:55988 - 31285 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.005349193s
	[INFO] 10.244.0.17:55988 - 9123 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.005882194s
	[INFO] 10.244.0.17:55988 - 58930 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000160674s
	[INFO] 10.244.0.17:55988 - 11118 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00009422s
	[INFO] 10.244.0.17:39435 - 33042 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000192764s
	[INFO] 10.244.0.17:39435 - 32854 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095541s
	[INFO] 10.244.0.17:49808 - 52757 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126483s
	[INFO] 10.244.0.17:49808 - 52995 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094031s
	[INFO] 10.244.0.17:57414 - 17751 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000119024s
	[INFO] 10.244.0.17:57414 - 17948 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000227455s
	[INFO] 10.244.0.17:55996 - 57939 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002389127s
	[INFO] 10.244.0.17:55996 - 58367 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004000016s
	[INFO] 10.244.0.17:43890 - 61648 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000199812s
	[INFO] 10.244.0.17:43890 - 61911 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000109031s
	[INFO] 10.244.0.21:48138 - 15433 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000214245s
	[INFO] 10.244.0.21:54092 - 17955 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000186725s
	[INFO] 10.244.0.21:57929 - 20022 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00025647s
	[INFO] 10.244.0.21:45880 - 16769 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000272158s
	[INFO] 10.244.0.21:41860 - 28684 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000468269s
	[INFO] 10.244.0.21:58165 - 56316 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000644796s
	[INFO] 10.244.0.21:58383 - 32930 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002938299s
	[INFO] 10.244.0.21:44814 - 20389 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00341502s
	[INFO] 10.244.0.21:37526 - 61926 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002089752s
	[INFO] 10.244.0.21:50522 - 15444 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001688125s
	
	
	==> describe nodes <==
	Name:               addons-494116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-494116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=addons-494116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T13_57_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-494116
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-494116"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 13:57:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-494116
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 13:59:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 13:58:58 +0000   Fri, 21 Nov 2025 13:57:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 13:58:58 +0000   Fri, 21 Nov 2025 13:57:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 13:58:58 +0000   Fri, 21 Nov 2025 13:57:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 13:58:58 +0000   Fri, 21 Nov 2025 13:58:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-494116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                c3d2669c-a077-4dc1-a6d1-95f3950011ce
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-6f9fcf858b-rkw7z     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  gadget                      gadget-mndpk                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  gcp-auth                    gcp-auth-78565c9fb4-c7vmg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-2z7nm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         110s
	  kube-system                 coredns-66bc5c9577-frfnw                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 csi-hostpathplugin-l2g77                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 etcd-addons-494116                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-5wkpj                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-addons-494116                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-addons-494116       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-cnpzl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-addons-494116                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 metrics-server-85b7d694d7-5ptdb             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         112s
	  kube-system                 nvidia-device-plugin-daemonset-tkkkl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-6b586f9694-cvgwr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 registry-creds-764b6fb674-sl95w             0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-proxy-mlm5l                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 snapshot-controller-7d9fbc56b8-jrcpn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 snapshot-controller-7d9fbc56b8-vckmf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  local-path-storage          local-path-provisioner-648f6765c9-v9sg7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-7w57n              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     110s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 115s  kube-proxy       
	  Normal   Starting                 2m2s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m2s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m1s  kubelet          Node addons-494116 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m1s  kubelet          Node addons-494116 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m1s  kubelet          Node addons-494116 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           117s  node-controller  Node addons-494116 event: Registered Node addons-494116 in Controller
	  Normal   NodeReady                75s   kubelet          Node addons-494116 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015310] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503949] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032916] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.894651] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.192036] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 12:49] hrtimer: interrupt took 26907018 ns
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	[Nov21 13:57] overlayfs: idmapped layers are currently not supported
	[  +0.074753] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7] <==
	{"level":"warn","ts":"2025-11-21T13:57:12.136200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.151377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.166507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.184953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.221901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.231437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.248134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.269092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.304215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.310260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.326801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.368219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.391268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.407731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.424597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.457500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.470477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.507274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:12.561039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:28.308046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:28.328779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:50.391777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:50.406623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:50.442223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:57:50.449183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48232","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [274e8dd2dab304ec0d549c501c66439bc79c49a303e2f1f5be056305820a80c8] <==
	2025/11/21 13:58:56 GCP Auth Webhook started!
	2025/11/21 13:59:05 Ready to marshal response ...
	2025/11/21 13:59:05 Ready to write response ...
	2025/11/21 13:59:05 Ready to marshal response ...
	2025/11/21 13:59:05 Ready to write response ...
	2025/11/21 13:59:05 Ready to marshal response ...
	2025/11/21 13:59:05 Ready to write response ...
	
	
	==> kernel <==
	 13:59:17 up  1:41,  0 user,  load average: 2.70, 1.87, 2.45
	Linux addons-494116 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f] <==
	I1121 13:57:21.801929       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 13:57:21.802750       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 13:57:51.802079       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 13:57:51.803187       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 13:57:51.803375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 13:57:51.803473       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1121 13:57:53.402908       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 13:57:53.402939       1 metrics.go:72] Registering metrics
	I1121 13:57:53.403000       1 controller.go:711] "Syncing nftables rules"
	I1121 13:58:01.804469       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:58:01.804526       1 main.go:301] handling current node
	I1121 13:58:11.802115       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:58:11.802173       1 main.go:301] handling current node
	I1121 13:58:21.802085       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:58:21.802113       1 main.go:301] handling current node
	I1121 13:58:31.802091       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:58:31.802212       1 main.go:301] handling current node
	I1121 13:58:41.802214       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:58:41.802241       1 main.go:301] handling current node
	I1121 13:58:51.805668       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:58:51.805751       1 main.go:301] handling current node
	I1121 13:59:01.801539       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:59:01.801569       1 main.go:301] handling current node
	I1121 13:59:11.802037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:59:11.802073       1 main.go:301] handling current node
	
	
	==> kube-apiserver [753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30] <==
	I1121 13:57:27.629088       1 controller.go:667] quota admission added evaluator for: jobs.batch
	I1121 13:57:27.892592       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.105.78.45"}
	I1121 13:57:27.903178       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1121 13:57:28.034416       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.132.138"}
	W1121 13:57:28.307613       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 13:57:28.323028       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1121 13:57:31.135366       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.136.6"}
	W1121 13:57:50.391540       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 13:57:50.406022       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 13:57:50.428288       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1121 13:57:50.447210       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1121 13:58:02.071259       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.136.6:443: connect: connection refused
	E1121 13:58:02.071309       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.136.6:443: connect: connection refused" logger="UnhandledError"
	W1121 13:58:02.071547       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.136.6:443: connect: connection refused
	E1121 13:58:02.071621       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.136.6:443: connect: connection refused" logger="UnhandledError"
	W1121 13:58:02.179090       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.136.6:443: connect: connection refused
	E1121 13:58:02.182647       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.136.6:443: connect: connection refused" logger="UnhandledError"
	E1121 13:58:25.446326       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.206.105:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.206.105:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.206.105:443: connect: connection refused" logger="UnhandledError"
	W1121 13:58:25.447918       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 13:58:25.449958       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1121 13:58:25.491598       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1121 13:59:14.730415       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45208: use of closed network connection
	
	
	==> kube-controller-manager [562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8] <==
	I1121 13:57:20.400056       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 13:57:20.408227       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-494116" podCIDRs=["10.244.0.0/24"]
	I1121 13:57:20.412700       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 13:57:20.412721       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 13:57:20.412729       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 13:57:20.413772       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 13:57:20.413827       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 13:57:20.414507       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 13:57:20.414664       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 13:57:20.419609       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 13:57:20.419673       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 13:57:20.420859       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 13:57:20.426152       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 13:57:20.429286       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	E1121 13:57:25.968373       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1121 13:57:50.380836       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 13:57:50.384895       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	E1121 13:57:50.433879       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 13:57:50.434056       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 13:57:50.434128       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 13:57:50.485349       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 13:57:50.535201       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 13:58:05.391458       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1121 13:58:20.494473       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1121 13:58:20.539515       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a] <==
	I1121 13:57:21.563156       1 server_linux.go:53] "Using iptables proxy"
	I1121 13:57:21.633773       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 13:57:21.734866       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 13:57:21.734903       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 13:57:21.735005       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 13:57:21.759019       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 13:57:21.759132       1 server_linux.go:132] "Using iptables Proxier"
	I1121 13:57:21.763104       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 13:57:21.763439       1 server.go:527] "Version info" version="v1.34.1"
	I1121 13:57:21.763508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 13:57:21.773217       1 config.go:106] "Starting endpoint slice config controller"
	I1121 13:57:21.773239       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 13:57:21.773552       1 config.go:200] "Starting service config controller"
	I1121 13:57:21.773566       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 13:57:21.773871       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 13:57:21.773885       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 13:57:21.774315       1 config.go:309] "Starting node config controller"
	I1121 13:57:21.774329       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 13:57:21.774336       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 13:57:21.875359       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 13:57:21.875499       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 13:57:21.875768       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef] <==
	E1121 13:57:13.354191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 13:57:13.354233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 13:57:13.354297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 13:57:13.354351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 13:57:13.354400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 13:57:13.358070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 13:57:13.358180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 13:57:13.358507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 13:57:13.358596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 13:57:13.358687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 13:57:13.358786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 13:57:13.358925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 13:57:13.359118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 13:57:13.359293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 13:57:13.359425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 13:57:13.359599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 13:57:14.230266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 13:57:14.263257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 13:57:14.324379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 13:57:14.333946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 13:57:14.415010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 13:57:14.481481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 13:57:14.545592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 13:57:14.549067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1121 13:57:16.435943       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 13:58:39 addons-494116 kubelet[1273]: I1121 13:58:39.810746    1273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6jkg2\" (UniqueName: \"kubernetes.io/projected/3b541795-77bf-4b0a-ac61-1a69249c1102-kube-api-access-6jkg2\") on node \"addons-494116\" DevicePath \"\""
	Nov 21 13:58:40 addons-494116 kubelet[1273]: I1121 13:58:40.576483    1273 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fbf395d6bd806bfa28c0477bc9ef34142c6ad5afd061790443ccea91c8baf10"
	Nov 21 13:58:40 addons-494116 kubelet[1273]: I1121 13:58:40.919115    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mmjn\" (UniqueName: \"kubernetes.io/projected/58b94ea5-bca2-4fec-9282-9ec1e42bb943-kube-api-access-2mmjn\") pod \"58b94ea5-bca2-4fec-9282-9ec1e42bb943\" (UID: \"58b94ea5-bca2-4fec-9282-9ec1e42bb943\") "
	Nov 21 13:58:40 addons-494116 kubelet[1273]: I1121 13:58:40.925624    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b94ea5-bca2-4fec-9282-9ec1e42bb943-kube-api-access-2mmjn" (OuterVolumeSpecName: "kube-api-access-2mmjn") pod "58b94ea5-bca2-4fec-9282-9ec1e42bb943" (UID: "58b94ea5-bca2-4fec-9282-9ec1e42bb943"). InnerVolumeSpecName "kube-api-access-2mmjn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 13:58:41 addons-494116 kubelet[1273]: I1121 13:58:41.019586    1273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2mmjn\" (UniqueName: \"kubernetes.io/projected/58b94ea5-bca2-4fec-9282-9ec1e42bb943-kube-api-access-2mmjn\") on node \"addons-494116\" DevicePath \"\""
	Nov 21 13:58:41 addons-494116 kubelet[1273]: I1121 13:58:41.591666    1273 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe12046627b5a89bf652b0e740077671847a311d9f3cf951947fd195904afe7e"
	Nov 21 13:58:42 addons-494116 kubelet[1273]: I1121 13:58:42.596282    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mlm5l" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 13:58:43 addons-494116 kubelet[1273]: I1121 13:58:43.601423    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mlm5l" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 13:58:48 addons-494116 kubelet[1273]: I1121 13:58:48.660198    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-mlm5l" podStartSLOduration=8.155230232 podStartE2EDuration="46.660179623s" podCreationTimestamp="2025-11-21 13:58:02 +0000 UTC" firstStartedPulling="2025-11-21 13:58:03.462886882 +0000 UTC m=+47.659473905" lastFinishedPulling="2025-11-21 13:58:41.967836273 +0000 UTC m=+86.164423296" observedRunningTime="2025-11-21 13:58:42.62205073 +0000 UTC m=+86.818637753" watchObservedRunningTime="2025-11-21 13:58:48.660179623 +0000 UTC m=+92.856766654"
	Nov 21 13:58:53 addons-494116 kubelet[1273]: I1121 13:58:53.672674    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-mndpk" podStartSLOduration=66.07924773 podStartE2EDuration="1m27.672656564s" podCreationTimestamp="2025-11-21 13:57:26 +0000 UTC" firstStartedPulling="2025-11-21 13:58:31.541326171 +0000 UTC m=+75.737913194" lastFinishedPulling="2025-11-21 13:58:53.134734997 +0000 UTC m=+97.331322028" observedRunningTime="2025-11-21 13:58:53.672212163 +0000 UTC m=+97.868799186" watchObservedRunningTime="2025-11-21 13:58:53.672656564 +0000 UTC m=+97.869243587"
	Nov 21 13:58:53 addons-494116 kubelet[1273]: I1121 13:58:53.673143    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-2z7nm" podStartSLOduration=44.533300442 podStartE2EDuration="1m26.673135041s" podCreationTimestamp="2025-11-21 13:57:27 +0000 UTC" firstStartedPulling="2025-11-21 13:58:06.207110028 +0000 UTC m=+50.403697051" lastFinishedPulling="2025-11-21 13:58:48.346944627 +0000 UTC m=+92.543531650" observedRunningTime="2025-11-21 13:58:48.663395257 +0000 UTC m=+92.859982288" watchObservedRunningTime="2025-11-21 13:58:53.673135041 +0000 UTC m=+97.869722072"
	Nov 21 13:58:59 addons-494116 kubelet[1273]: I1121 13:58:59.181011    1273 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 21 13:58:59 addons-494116 kubelet[1273]: I1121 13:58:59.181079    1273 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 21 13:58:59 addons-494116 kubelet[1273]: I1121 13:58:59.664623    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-c7vmg" podStartSLOduration=66.68694237 podStartE2EDuration="1m28.664605703s" podCreationTimestamp="2025-11-21 13:57:31 +0000 UTC" firstStartedPulling="2025-11-21 13:58:34.225949691 +0000 UTC m=+78.422536714" lastFinishedPulling="2025-11-21 13:58:56.203613016 +0000 UTC m=+100.400200047" observedRunningTime="2025-11-21 13:58:56.6879234 +0000 UTC m=+100.884510431" watchObservedRunningTime="2025-11-21 13:58:59.664605703 +0000 UTC m=+103.861192726"
	Nov 21 13:59:02 addons-494116 kubelet[1273]: I1121 13:59:02.749794    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-l2g77" podStartSLOduration=1.829153128 podStartE2EDuration="1m0.749775621s" podCreationTimestamp="2025-11-21 13:58:02 +0000 UTC" firstStartedPulling="2025-11-21 13:58:02.969745831 +0000 UTC m=+47.166332854" lastFinishedPulling="2025-11-21 13:59:01.890368325 +0000 UTC m=+106.086955347" observedRunningTime="2025-11-21 13:59:02.738026643 +0000 UTC m=+106.934613674" watchObservedRunningTime="2025-11-21 13:59:02.749775621 +0000 UTC m=+106.946362652"
	Nov 21 13:59:05 addons-494116 kubelet[1273]: I1121 13:59:05.592222    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3076dae2-a593-4499-8efe-3f9806b2d96d-gcp-creds\") pod \"busybox\" (UID: \"3076dae2-a593-4499-8efe-3f9806b2d96d\") " pod="default/busybox"
	Nov 21 13:59:05 addons-494116 kubelet[1273]: I1121 13:59:05.592806    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk9r7\" (UniqueName: \"kubernetes.io/projected/3076dae2-a593-4499-8efe-3f9806b2d96d-kube-api-access-sk9r7\") pod \"busybox\" (UID: \"3076dae2-a593-4499-8efe-3f9806b2d96d\") " pod="default/busybox"
	Nov 21 13:59:05 addons-494116 kubelet[1273]: I1121 13:59:05.945262    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1749433d-a3d8-4258-afd6-e4a67c235fbb" path="/var/lib/kubelet/pods/1749433d-a3d8-4258-afd6-e4a67c235fbb/volumes"
	Nov 21 13:59:06 addons-494116 kubelet[1273]: E1121 13:59:06.198462    1273 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 21 13:59:06 addons-494116 kubelet[1273]: E1121 13:59:06.198562    1273 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6f343b9-9d5d-4236-a7b8-a958f297db46-gcr-creds podName:b6f343b9-9d5d-4236-a7b8-a958f297db46 nodeName:}" failed. No retries permitted until 2025-11-21 14:00:10.198543979 +0000 UTC m=+174.395131010 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/b6f343b9-9d5d-4236-a7b8-a958f297db46-gcr-creds") pod "registry-creds-764b6fb674-sl95w" (UID: "b6f343b9-9d5d-4236-a7b8-a958f297db46") : secret "registry-creds-gcr" not found
	Nov 21 13:59:08 addons-494116 kubelet[1273]: I1121 13:59:08.745349    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.700321383 podStartE2EDuration="3.745319991s" podCreationTimestamp="2025-11-21 13:59:05 +0000 UTC" firstStartedPulling="2025-11-21 13:59:05.862439362 +0000 UTC m=+110.059026385" lastFinishedPulling="2025-11-21 13:59:07.907437962 +0000 UTC m=+112.104024993" observedRunningTime="2025-11-21 13:59:08.743385136 +0000 UTC m=+112.939972158" watchObservedRunningTime="2025-11-21 13:59:08.745319991 +0000 UTC m=+112.941907013"
	Nov 21 13:59:11 addons-494116 kubelet[1273]: I1121 13:59:11.946266    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b541795-77bf-4b0a-ac61-1a69249c1102" path="/var/lib/kubelet/pods/3b541795-77bf-4b0a-ac61-1a69249c1102/volumes"
	Nov 21 13:59:15 addons-494116 kubelet[1273]: I1121 13:59:15.969343    1273 scope.go:117] "RemoveContainer" containerID="c8640cc1179116e9b0591ba8eb80d50894f50dfe49d3ebbce84e96ca60fa7f32"
	Nov 21 13:59:16 addons-494116 kubelet[1273]: I1121 13:59:16.007606    1273 scope.go:117] "RemoveContainer" containerID="042625a3156841da6ee10849f232afc929531633a0bb0e1c0385f057c271b313"
	Nov 21 13:59:16 addons-494116 kubelet[1273]: E1121 13:59:16.143258    1273 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/55b6994ec560e150bfe1131d25d0adab3cc04c7af8706cead8cc816bec571304/diff" to get inode usage: stat /var/lib/containers/storage/overlay/55b6994ec560e150bfe1131d25d0adab3cc04c7af8706cead8cc816bec571304/diff: no such file or directory, extraDiskErr: <nil>
	
	
	==> storage-provisioner [a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348] <==
	W1121 13:58:51.892033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:53.900233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:53.907746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:55.911848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:55.918060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:57.921251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:57.925573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:59.929490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:59.934623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:01.939023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:01.951280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:03.954441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:03.959193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:05.962480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:05.971793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:07.974988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:07.979558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:09.982664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:10.001061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:12.004245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:12.009386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:14.012829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:14.017729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:16.021227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:59:16.031677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
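
Two details in the log dump above are worth separating from the actual failures. The kubelet fsHandler error at 13:59:16 (could not stat ".../overlay/55b6994.../diff") fires immediately after two RemoveContainer events and is the usual benign race between stats collection and image-layer cleanup, not a symptom. The storage-provisioner warnings are different: the component hits the v1 Endpoints API every two seconds, presumably for its leader-election lock, and on Kubernetes v1.34 every call logs the deprecation warning; that will continue until it moves to a discovery.k8s.io/v1 EndpointSlice watch or a coordination.k8s.io Lease lock. Two plain kubectl queries (assuming nothing beyond the context name used throughout this report) confirm whether the replacement resources are available on this cluster:

	kubectl --context addons-494116 get endpointslices.discovery.k8s.io -A
	kubectl --context addons-494116 get leases -n kube-system
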
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-494116 -n addons-494116
helpers_test.go:269: (dbg) Run:  kubectl --context addons-494116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-lfq45 ingress-nginx-admission-patch-2v528 registry-creds-764b6fb674-sl95w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-494116 describe pod ingress-nginx-admission-create-lfq45 ingress-nginx-admission-patch-2v528 registry-creds-764b6fb674-sl95w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-494116 describe pod ingress-nginx-admission-create-lfq45 ingress-nginx-admission-patch-2v528 registry-creds-764b6fb674-sl95w: exit status 1 (98.777172ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lfq45" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2v528" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-sl95w" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-494116 describe pod ingress-nginx-admission-create-lfq45 ingress-nginx-admission-patch-2v528 registry-creds-764b6fb674-sl95w: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable headlamp --alsologtostderr -v=1: exit status 11 (255.75758ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 13:59:18.509331  298334 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:59:18.510377  298334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:18.510394  298334 out.go:374] Setting ErrFile to fd 2...
	I1121 13:59:18.510401  298334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:18.510677  298334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:59:18.510990  298334 mustload.go:66] Loading cluster: addons-494116
	I1121 13:59:18.511405  298334 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:18.511425  298334 addons.go:622] checking whether the cluster is paused
	I1121 13:59:18.511533  298334 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:18.511549  298334 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:59:18.511989  298334 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:59:18.529842  298334 ssh_runner.go:195] Run: systemctl --version
	I1121 13:59:18.529914  298334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:59:18.548739  298334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:59:18.653729  298334 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:59:18.653851  298334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:59:18.686711  298334 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 13:59:18.686733  298334 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 13:59:18.686738  298334 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 13:59:18.686752  298334 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 13:59:18.686757  298334 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 13:59:18.686761  298334 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 13:59:18.686764  298334 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 13:59:18.686767  298334 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 13:59:18.686771  298334 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 13:59:18.686777  298334 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 13:59:18.686785  298334 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 13:59:18.686788  298334 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 13:59:18.686792  298334 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 13:59:18.686795  298334 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 13:59:18.686798  298334 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 13:59:18.686804  298334 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 13:59:18.686809  298334 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 13:59:18.686814  298334 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 13:59:18.686818  298334 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 13:59:18.686821  298334 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 13:59:18.686832  298334 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 13:59:18.686840  298334 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 13:59:18.686843  298334 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 13:59:18.686846  298334 cri.go:89] found id: ""
	I1121 13:59:18.686895  298334 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:59:18.701640  298334 out.go:203] 
	W1121 13:59:18.704452  298334 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:59:18.704488  298334 out.go:285] * 
	* 
	W1121 13:59:18.709609  298334 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:59:18.712500  298334 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.32s)
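
Every "addons disable" in this report dies at the same step. Before touching an addon, minikube checks whether the cluster is paused (the addons.go:622 line in the trace above): it lists kube-system containers over ssh with crictl, which succeeds and returns 23 container IDs, then runs "sudo runc list -f json", which exits 1 with "open /run/runc: no such file or directory", and the CLI surfaces that as MK_ADDON_DISABLE_PAUSED. Since crictl plainly sees the containers, the likely explanation is that this crio build launches containers through an OCI runtime whose state directory is not runc's default /run/runc (crun, for instance, keeps state under /run/crun), so "runc list" has nothing to open. A manual replay of the check, with the crun state root marked as an assumption:

	# Step 1 of the paused check: succeeds, prints the container IDs seen above.
	out/minikube-linux-arm64 -p addons-494116 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Step 2: reproduces the failure ("open /run/runc: no such file or directory").
	out/minikube-linux-arm64 -p addons-494116 ssh -- sudo runc list -f json
	# Assumption, not verified here: the runtime is crun using its default state
	# root; check the node's crio configuration before trusting this.
	out/minikube-linux-arm64 -p addons-494116 ssh -- sudo crun --root /run/crun list

If that assumption holds, the fix belongs in minikube's paused check rather than in any addon: it should ask crio which runtime is in use instead of hard-coding runc.
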

x
+
TestAddons/parallel/CloudSpanner (5.3s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-rkw7z" [a97edd8a-a301-44d9-a3a9-0d4ae409ef8d] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003636699s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (292.003505ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 13:59:35.578806  298771 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:59:35.579661  298771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:35.579680  298771 out.go:374] Setting ErrFile to fd 2...
	I1121 13:59:35.579688  298771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:35.580047  298771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:59:35.580455  298771 mustload.go:66] Loading cluster: addons-494116
	I1121 13:59:35.581004  298771 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:35.581028  298771 addons.go:622] checking whether the cluster is paused
	I1121 13:59:35.581191  298771 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:35.581233  298771 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:59:35.581869  298771 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:59:35.599663  298771 ssh_runner.go:195] Run: systemctl --version
	I1121 13:59:35.599745  298771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:59:35.618595  298771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:59:35.719231  298771 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:59:35.719380  298771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:59:35.755148  298771 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 13:59:35.755169  298771 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 13:59:35.755173  298771 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 13:59:35.755177  298771 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 13:59:35.755185  298771 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 13:59:35.755189  298771 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 13:59:35.755193  298771 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 13:59:35.755196  298771 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 13:59:35.755199  298771 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 13:59:35.755209  298771 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 13:59:35.755213  298771 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 13:59:35.755216  298771 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 13:59:35.755219  298771 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 13:59:35.755222  298771 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 13:59:35.755225  298771 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 13:59:35.755230  298771 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 13:59:35.755233  298771 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 13:59:35.755238  298771 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 13:59:35.755241  298771 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 13:59:35.755244  298771 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 13:59:35.755249  298771 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 13:59:35.755252  298771 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 13:59:35.755255  298771 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 13:59:35.755258  298771 cri.go:89] found id: ""
	I1121 13:59:35.755314  298771 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:59:35.794207  298771 out.go:203] 
	W1121 13:59:35.797832  298771 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:59:35.797861  298771 out.go:285] * 
	* 
	W1121 13:59:35.802986  298771 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:59:35.805958  298771 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.30s)
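
The emulator itself is healthy within five seconds, so the whole 5.30s FAIL is charged to the shared disable-time paused check analyzed under Headlamp above. The same exit 11 / MK_ADDON_DISABLE_PAUSED / "open /run/runc" signature repeats verbatim for LocalPath, NvidiaDevicePlugin, and Yakd below.
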

x
+
TestAddons/parallel/LocalPath (9.43s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-494116 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-494116 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-494116 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [6fbe66fe-586f-4e52-937b-1a2cf44da778] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [6fbe66fe-586f-4e52-937b-1a2cf44da778] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [6fbe66fe-586f-4e52-937b-1a2cf44da778] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00390999s
addons_test.go:967: (dbg) Run:  kubectl --context addons-494116 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 ssh "cat /opt/local-path-provisioner/pvc-26d6969d-2083-4cb8-a3a0-2581439214de_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-494116 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-494116 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (275.089851ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 13:59:39.621931  298975 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:59:39.623057  298975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:39.623110  298975 out.go:374] Setting ErrFile to fd 2...
	I1121 13:59:39.623133  298975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:39.623433  298975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:59:39.623766  298975 mustload.go:66] Loading cluster: addons-494116
	I1121 13:59:39.624217  298975 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:39.624266  298975 addons.go:622] checking whether the cluster is paused
	I1121 13:59:39.624431  298975 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:39.624469  298975 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:59:39.624964  298975 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:59:39.645154  298975 ssh_runner.go:195] Run: systemctl --version
	I1121 13:59:39.645230  298975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:59:39.665455  298975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:59:39.768120  298975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:59:39.768197  298975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:59:39.806127  298975 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 13:59:39.806148  298975 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 13:59:39.806156  298975 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 13:59:39.806160  298975 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 13:59:39.806168  298975 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 13:59:39.806172  298975 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 13:59:39.806175  298975 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 13:59:39.806178  298975 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 13:59:39.806181  298975 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 13:59:39.806188  298975 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 13:59:39.806191  298975 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 13:59:39.806194  298975 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 13:59:39.806198  298975 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 13:59:39.806201  298975 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 13:59:39.806204  298975 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 13:59:39.806210  298975 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 13:59:39.806217  298975 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 13:59:39.806223  298975 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 13:59:39.806227  298975 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 13:59:39.806230  298975 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 13:59:39.806234  298975 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 13:59:39.806242  298975 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 13:59:39.806245  298975 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 13:59:39.806248  298975 cri.go:89] found id: ""
	I1121 13:59:39.806299  298975 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:59:39.821849  298975 out.go:203] 
	W1121 13:59:39.824933  298975 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:59:39.824959  298975 out.go:285] * 
	* 
	W1121 13:59:39.830011  298975 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:59:39.833580  298975 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.43s)
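
LocalPath is the clearest evidence that the addons themselves still work: the PVC binds, the test pod runs to Succeeded, and file1 is read back over ssh, all before the disable step fails. The repeated "get pvc" invocations above are the helper's poll loop; an equivalent standalone loop (same resource names as the test, with an explicit timeout added) would look like this:

	# Poll test-pvc until it reports Bound, giving up after roughly 5 minutes.
	phase=""
	for i in $(seq 1 150); do
	  phase=$(kubectl --context addons-494116 get pvc test-pvc -n default -o 'jsonpath={.status.phase}')
	  [ "$phase" = "Bound" ] && break
	  sleep 2
	done
	echo "final phase: ${phase}"
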

x
+
TestAddons/parallel/NvidiaDevicePlugin (6.41s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-tkkkl" [8f752345-52f8-4288-8728-33e535a60746] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.009712636s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable nvidia-device-plugin --alsologtostderr -v=1
2025/11/21 13:59:30 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (402.421225ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 13:59:30.157312  298545 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:59:30.158241  298545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:30.158289  298545 out.go:374] Setting ErrFile to fd 2...
	I1121 13:59:30.158314  298545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:30.159447  298545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:59:30.161003  298545 mustload.go:66] Loading cluster: addons-494116
	I1121 13:59:30.161879  298545 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:30.161971  298545 addons.go:622] checking whether the cluster is paused
	I1121 13:59:30.162235  298545 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:30.162315  298545 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:59:30.163039  298545 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:59:30.199226  298545 ssh_runner.go:195] Run: systemctl --version
	I1121 13:59:30.199282  298545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:59:30.235314  298545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:59:30.337611  298545 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:59:30.337704  298545 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:59:30.375983  298545 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 13:59:30.376002  298545 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 13:59:30.376007  298545 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 13:59:30.376011  298545 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 13:59:30.376014  298545 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 13:59:30.376019  298545 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 13:59:30.376022  298545 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 13:59:30.376025  298545 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 13:59:30.376029  298545 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 13:59:30.376043  298545 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 13:59:30.376047  298545 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 13:59:30.376050  298545 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 13:59:30.376053  298545 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 13:59:30.376056  298545 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 13:59:30.376059  298545 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 13:59:30.376068  298545 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 13:59:30.376071  298545 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 13:59:30.376076  298545 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 13:59:30.376079  298545 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 13:59:30.376083  298545 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 13:59:30.376091  298545 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 13:59:30.376094  298545 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 13:59:30.376097  298545 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 13:59:30.376100  298545 cri.go:89] found id: ""
	I1121 13:59:30.376149  298545 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:59:30.393546  298545 out.go:203] 
	W1121 13:59:30.396338  298545 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:59:30.396356  298545 out.go:285] * 
	* 
	W1121 13:59:30.401829  298545 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:59:30.404767  298545 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.41s)

x
+
TestAddons/parallel/Yakd (5.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-7w57n" [9a798abc-4639-4211-9eb1-8f65bc2e1f47] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004290709s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-494116 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-494116 addons disable yakd --alsologtostderr -v=1: exit status 11 (272.545376ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 13:59:23.779255  298399 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:59:23.779995  298399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:23.780010  298399 out.go:374] Setting ErrFile to fd 2...
	I1121 13:59:23.780014  298399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:59:23.780294  298399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:59:23.780629  298399 mustload.go:66] Loading cluster: addons-494116
	I1121 13:59:23.780990  298399 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:23.781007  298399 addons.go:622] checking whether the cluster is paused
	I1121 13:59:23.781109  298399 config.go:182] Loaded profile config "addons-494116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:59:23.781123  298399 host.go:66] Checking if "addons-494116" exists ...
	I1121 13:59:23.781552  298399 cli_runner.go:164] Run: docker container inspect addons-494116 --format={{.State.Status}}
	I1121 13:59:23.810921  298399 ssh_runner.go:195] Run: systemctl --version
	I1121 13:59:23.810991  298399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-494116
	I1121 13:59:23.828514  298399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/addons-494116/id_rsa Username:docker}
	I1121 13:59:23.927344  298399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:59:23.927428  298399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:59:23.957160  298399 cri.go:89] found id: "e4320f5fe88952421f37289bbe5229f5cb8f5c70de62f21f52045600157afd04"
	I1121 13:59:23.957181  298399 cri.go:89] found id: "22aad46f46903732cb76b5a68cc28d8766b917439f3cc97ef34dbfbf6b90e1bb"
	I1121 13:59:23.957186  298399 cri.go:89] found id: "34ffe03bcd4d1fe2b5fb70358068906d29243b4b0243f2533413f7ab515b389e"
	I1121 13:59:23.957190  298399 cri.go:89] found id: "c5469a211994ed4f5c3864f62727521ef5b1b61341636439dcf58b2783e96ac7"
	I1121 13:59:23.957193  298399 cri.go:89] found id: "4298f174eb879cb95999d72049d6abda4f0aea8243f1c1fbcbff04dedc12815c"
	I1121 13:59:23.957197  298399 cri.go:89] found id: "1e877b39bef0841e70e37a8fe76d3afb9d15eab014215c1e5b8cbbbf980ec980"
	I1121 13:59:23.957205  298399 cri.go:89] found id: "5d49e8d42c411b848293fc83955688c55b19be9e9c85457c7ed751cf46d6968b"
	I1121 13:59:23.957209  298399 cri.go:89] found id: "3c55ac84412c87c71ab05728b7dd25e9fb060bea9e7c43fca8de12671d9e03ad"
	I1121 13:59:23.957213  298399 cri.go:89] found id: "3e9d7de7df80ea3e9b60faecbbf9af12490243b75bebe99963ad5cbb2b473aa0"
	I1121 13:59:23.957219  298399 cri.go:89] found id: "15f09ce47d75a056a5aa68aeba2f67e8119d96e898ee4f1755d28c3de858e35d"
	I1121 13:59:23.957227  298399 cri.go:89] found id: "61d5ed18a54c65cfc0a7ff1fb073070036b154e975cdabc5e0c29a34958babfa"
	I1121 13:59:23.957231  298399 cri.go:89] found id: "0ac555261f857b219fa4a08069009939f2c15241b15fa88774b6700276588005"
	I1121 13:59:23.957234  298399 cri.go:89] found id: "f601cd1551b2652eafc3ba02419cc3f1487f76c3b849e06ebf553983b88703f7"
	I1121 13:59:23.957237  298399 cri.go:89] found id: "de59a0296292662ff64682d92fd9696ee4d5bf45b88bedc21ec54c0f9ce72813"
	I1121 13:59:23.957240  298399 cri.go:89] found id: "3c3896dadd82def4cf2a10ee995992786655b3c6428bb5a7fe2b6a0d86bad1f4"
	I1121 13:59:23.957245  298399 cri.go:89] found id: "a443f1743ed06dbe7a147db4e6bc8fc1feb4f64a8ade2bd1e439b2a20d073348"
	I1121 13:59:23.957251  298399 cri.go:89] found id: "6fa60b05394e1798fe8567892cac909ebb562fe265e240c58cbf14929dfb7c7a"
	I1121 13:59:23.957255  298399 cri.go:89] found id: "d401871bd196ab6d0ad066567cc47174b8f26e415ee78af0ab91b569d4691b6f"
	I1121 13:59:23.957258  298399 cri.go:89] found id: "013fd680426166b56fc25326598c8ee2f65a14fd8b96981903e0d7d440dcf65a"
	I1121 13:59:23.957261  298399 cri.go:89] found id: "562af98fdae9f5b2250156a4e11858bf961a21a2d7a939d801db55c398cc27e8"
	I1121 13:59:23.957265  298399 cri.go:89] found id: "870089e2cb7cff0f4228b185bc7b35905bf35f0ef9d15cb28054e946396e33ef"
	I1121 13:59:23.957269  298399 cri.go:89] found id: "753f8d0dbe26a43474118c7103ed6ab8444a196f76801381d01ab932ccefae30"
	I1121 13:59:23.957272  298399 cri.go:89] found id: "1b81e667338031c4ea221740a109e522a8ee3f96820d01c19a1a1e28ce4eada7"
	I1121 13:59:23.957275  298399 cri.go:89] found id: ""
	I1121 13:59:23.957327  298399 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:59:23.972853  298399 out.go:203] 
	W1121 13:59:23.975707  298399 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:59:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:59:23.975727  298399 out.go:285] * 
	W1121 13:59:23.982420  298399 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:59:23.987960  298399 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-494116 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.28s)
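A note on the failure mode, since it repeats across the addon tests above: minikube's `addons disable` first checks whether the cluster is paused by running `sudo runc list -f json` on the node, and `runc list` reads its state directory (/run/runc by default). On this CRI-O node that directory does not exist, so every disable call exits with MK_ADDON_DISABLE_PAUSED even though the CRI-side listing succeeds. A minimal check, assuming SSH access to the profile's node (profile name taken from the log; /run/runc is the runc default, not something this report configures):

	# state dir missing -> matches "open /run/runc: no such file or directory" above
	minikube -p addons-494116 ssh -- ls /run/runc
	# the crictl listing itself works, as the "found id:" lines above show
	minikube -p addons-494116 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system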

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-939098 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-939098 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-6q5rv" [53032433-3188-46f0-be28-a1186d880574] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-939098 -n functional-939098
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-21 14:16:22.732251615 +0000 UTC m=+1220.215231995
functional_test.go:1645: (dbg) Run:  kubectl --context functional-939098 describe po hello-node-connect-7d85dfc575-6q5rv -n default
functional_test.go:1645: (dbg) kubectl --context functional-939098 describe po hello-node-connect-7d85dfc575-6q5rv -n default:
Name:             hello-node-connect-7d85dfc575-6q5rv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-939098/192.168.49.2
Start Time:       Fri, 21 Nov 2025 14:06:22 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z74zh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-z74zh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6q5rv to functional-939098
  Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m6s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m6s (x5 over 10m)      kubelet            Error: ErrImagePull
  Warning  Failed     4m58s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m43s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
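The pull error is CRI-O's short-name enforcement at work: `kicbase/echo-server` carries no registry, and the node's short-name aliasing resolves it to more than one candidate registry, so the pull is rejected as ambiguous rather than retried. A hedged sketch of the usual fix, fully qualifying the image (the docker.io prefix is an assumption for illustration; the test deliberately uses the short name):

	kubectl --context functional-939098 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest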
functional_test.go:1645: (dbg) Run:  kubectl --context functional-939098 logs hello-node-connect-7d85dfc575-6q5rv -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-939098 logs hello-node-connect-7d85dfc575-6q5rv -n default: exit status 1 (101.949349ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6q5rv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-939098 logs hello-node-connect-7d85dfc575-6q5rv -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-939098 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-6q5rv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-939098/192.168.49.2
Start Time:       Fri, 21 Nov 2025 14:06:22 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z74zh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-z74zh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6q5rv to functional-939098
  Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-939098 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-939098 logs -l app=hello-node-connect: exit status 1 (110.812124ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6q5rv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-939098 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-939098 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.216.122
IPs:                      10.102.216.122
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30982/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
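The empty Endpoints field ties the service back to the pod failure: the NodePort service selects app=hello-node-connect, and its single matching pod never became Ready, so nothing was published behind port 30982. A quick confirmation, assuming the same kubectl context:

	kubectl --context functional-939098 get endpoints hello-node-connect
	# expect ENDPOINTS to show <none> while the pod stays in ImagePullBackOff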
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-939098
helpers_test.go:243: (dbg) docker inspect functional-939098:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b3967b76f881637cc86b07943839ed1b0f2b7860ee8ddc4b24a85e59ae20b59",
	        "Created": "2025-11-21T14:03:24.820315252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306485,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:03:24.891046853Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/4b3967b76f881637cc86b07943839ed1b0f2b7860ee8ddc4b24a85e59ae20b59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b3967b76f881637cc86b07943839ed1b0f2b7860ee8ddc4b24a85e59ae20b59/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b3967b76f881637cc86b07943839ed1b0f2b7860ee8ddc4b24a85e59ae20b59/hosts",
	        "LogPath": "/var/lib/docker/containers/4b3967b76f881637cc86b07943839ed1b0f2b7860ee8ddc4b24a85e59ae20b59/4b3967b76f881637cc86b07943839ed1b0f2b7860ee8ddc4b24a85e59ae20b59-json.log",
	        "Name": "/functional-939098",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-939098:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-939098",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b3967b76f881637cc86b07943839ed1b0f2b7860ee8ddc4b24a85e59ae20b59",
	                "LowerDir": "/var/lib/docker/overlay2/b44aea2354c448d8f33255528e321a68663788a15b0dada625d9fc27d22b3eab-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b44aea2354c448d8f33255528e321a68663788a15b0dada625d9fc27d22b3eab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b44aea2354c448d8f33255528e321a68663788a15b0dada625d9fc27d22b3eab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b44aea2354c448d8f33255528e321a68663788a15b0dada625d9fc27d22b3eab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-939098",
	                "Source": "/var/lib/docker/volumes/functional-939098/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-939098",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-939098",
	                "name.minikube.sigs.k8s.io": "functional-939098",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44b0e1edde12c2d0240d8c831d0bd6fa67ffe6e2f8f9ff5f72191367577ac1eb",
	            "SandboxKey": "/var/run/docker/netns/44b0e1edde12",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-939098": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:f6:ec:d8:68:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67185cb0971f14a8873eeecf109701a9be277c40667daa5971d13364c2f8e0b8",
	                    "EndpointID": "33ec41ae17ea9a1a636ca77051ac4aae28a8551263a4e07f7d9a23e554c4510c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-939098",
	                        "4b3967b76f88"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-939098 -n functional-939098
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-939098 logs -n 25: (1.475563901s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-939098 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:05 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:05 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:05 UTC │
	│ kubectl │ functional-939098 kubectl -- --context functional-939098 get pods                                                          │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:05 UTC │
	│ start   │ -p functional-939098 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:06 UTC │
	│ service │ invalid-svc -p functional-939098                                                                                           │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │                     │
	│ cp      │ functional-939098 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ config  │ functional-939098 config unset cpus                                                                                        │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ config  │ functional-939098 config get cpus                                                                                          │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │                     │
	│ config  │ functional-939098 config set cpus 2                                                                                        │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ config  │ functional-939098 config get cpus                                                                                          │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ config  │ functional-939098 config unset cpus                                                                                        │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ ssh     │ functional-939098 ssh -n functional-939098 sudo cat /home/docker/cp-test.txt                                               │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ config  │ functional-939098 config get cpus                                                                                          │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │                     │
	│ ssh     │ functional-939098 ssh echo hello                                                                                           │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ cp      │ functional-939098 cp functional-939098:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2222964975/001/cp-test.txt │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ ssh     │ functional-939098 ssh cat /etc/hostname                                                                                    │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ ssh     │ functional-939098 ssh -n functional-939098 sudo cat /home/docker/cp-test.txt                                               │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ tunnel  │ functional-939098 tunnel --alsologtostderr                                                                                 │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │                     │
	│ tunnel  │ functional-939098 tunnel --alsologtostderr                                                                                 │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │                     │
	│ cp      │ functional-939098 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ tunnel  │ functional-939098 tunnel --alsologtostderr                                                                                 │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │                     │
	│ ssh     │ functional-939098 ssh -n functional-939098 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ addons  │ functional-939098 addons list                                                                                              │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	│ addons  │ functional-939098 addons list -o json                                                                                      │ functional-939098 │ jenkins │ v1.37.0 │ 21 Nov 25 14:06 UTC │ 21 Nov 25 14:06 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:05:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:05:25.436717  310831 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:05:25.436814  310831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:05:25.436818  310831 out.go:374] Setting ErrFile to fd 2...
	I1121 14:05:25.436822  310831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:05:25.437169  310831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:05:25.437998  310831 out.go:368] Setting JSON to false
	I1121 14:05:25.438892  310831 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6477,"bootTime":1763727448,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:05:25.438986  310831 start.go:143] virtualization:  
	I1121 14:05:25.442607  310831 out.go:179] * [functional-939098] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:05:25.446432  310831 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:05:25.446550  310831 notify.go:221] Checking for updates...
	I1121 14:05:25.452467  310831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:05:25.455433  310831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:05:25.458312  310831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:05:25.461179  310831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:05:25.464121  310831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:05:25.467642  310831 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:05:25.467777  310831 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:05:25.507728  310831 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:05:25.507856  310831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:05:25.574278  310831 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-21 14:05:25.560987808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:05:25.574373  310831 docker.go:319] overlay module found
	I1121 14:05:25.577450  310831 out.go:179] * Using the docker driver based on existing profile
	I1121 14:05:25.580370  310831 start.go:309] selected driver: docker
	I1121 14:05:25.580379  310831 start.go:930] validating driver "docker" against &{Name:functional-939098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-939098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:05:25.580522  310831 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:05:25.580629  310831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:05:25.636821  310831 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-21 14:05:25.627538098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:05:25.637220  310831 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:05:25.637245  310831 cni.go:84] Creating CNI manager for ""
	I1121 14:05:25.637295  310831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:05:25.637334  310831 start.go:353] cluster config:
	{Name:functional-939098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-939098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:05:25.640539  310831 out.go:179] * Starting "functional-939098" primary control-plane node in "functional-939098" cluster
	I1121 14:05:25.643436  310831 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:05:25.646426  310831 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:05:25.649298  310831 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:05:25.649338  310831 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 14:05:25.649346  310831 cache.go:65] Caching tarball of preloaded images
	I1121 14:05:25.649359  310831 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:05:25.649430  310831 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 14:05:25.649439  310831 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:05:25.649550  310831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/config.json ...
	I1121 14:05:25.669189  310831 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:05:25.669200  310831 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:05:25.669221  310831 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:05:25.669244  310831 start.go:360] acquireMachinesLock for functional-939098: {Name:mka7d370f0e68583632870e207c5e4a7670de860 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:05:25.669317  310831 start.go:364] duration metric: took 54.344µs to acquireMachinesLock for "functional-939098"
	I1121 14:05:25.669340  310831 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:05:25.669345  310831 fix.go:54] fixHost starting: 
	I1121 14:05:25.669604  310831 cli_runner.go:164] Run: docker container inspect functional-939098 --format={{.State.Status}}
	I1121 14:05:25.686836  310831 fix.go:112] recreateIfNeeded on functional-939098: state=Running err=<nil>
	W1121 14:05:25.686857  310831 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:05:25.689998  310831 out.go:252] * Updating the running docker "functional-939098" container ...
	I1121 14:05:25.690020  310831 machine.go:94] provisionDockerMachine start ...
	I1121 14:05:25.690100  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:25.708089  310831 main.go:143] libmachine: Using SSH client type: native
	I1121 14:05:25.708445  310831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1121 14:05:25.708452  310831 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:05:25.847964  310831 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-939098
	
	I1121 14:05:25.847979  310831 ubuntu.go:182] provisioning hostname "functional-939098"
	I1121 14:05:25.848051  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:25.866590  310831 main.go:143] libmachine: Using SSH client type: native
	I1121 14:05:25.866894  310831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1121 14:05:25.866903  310831 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-939098 && echo "functional-939098" | sudo tee /etc/hostname
	I1121 14:05:26.018618  310831 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-939098
	
	I1121 14:05:26.018684  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:26.037741  310831 main.go:143] libmachine: Using SSH client type: native
	I1121 14:05:26.038038  310831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1121 14:05:26.038052  310831 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-939098' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-939098/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-939098' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:05:26.188840  310831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:05:26.188856  310831 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 14:05:26.188876  310831 ubuntu.go:190] setting up certificates
	I1121 14:05:26.188885  310831 provision.go:84] configureAuth start
	I1121 14:05:26.188949  310831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-939098
	I1121 14:05:26.206703  310831 provision.go:143] copyHostCerts
	I1121 14:05:26.206759  310831 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 14:05:26.206774  310831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 14:05:26.206901  310831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 14:05:26.207009  310831 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 14:05:26.207013  310831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 14:05:26.207041  310831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 14:05:26.207089  310831 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 14:05:26.207097  310831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 14:05:26.207119  310831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 14:05:26.207162  310831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.functional-939098 san=[127.0.0.1 192.168.49.2 functional-939098 localhost minikube]
	I1121 14:05:26.378322  310831 provision.go:177] copyRemoteCerts
	I1121 14:05:26.378380  310831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:05:26.378421  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:26.395393  310831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
	I1121 14:05:26.496377  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:05:26.519586  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:05:26.539863  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:05:26.558090  310831 provision.go:87] duration metric: took 369.19193ms to configureAuth
	I1121 14:05:26.558107  310831 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:05:26.558306  310831 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:05:26.558409  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:26.576254  310831 main.go:143] libmachine: Using SSH client type: native
	I1121 14:05:26.576633  310831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1121 14:05:26.576646  310831 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:05:31.978381  310831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:05:31.978395  310831 machine.go:97] duration metric: took 6.288367569s to provisionDockerMachine
	I1121 14:05:31.978404  310831 start.go:293] postStartSetup for "functional-939098" (driver="docker")
	I1121 14:05:31.978414  310831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:05:31.978489  310831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:05:31.978533  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:32.004449  310831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
	I1121 14:05:32.108853  310831 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:05:32.112288  310831 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:05:32.112310  310831 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:05:32.112328  310831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 14:05:32.112414  310831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 14:05:32.112493  310831 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 14:05:32.112575  310831 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/test/nested/copy/291060/hosts -> hosts in /etc/test/nested/copy/291060
	I1121 14:05:32.112618  310831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/291060
	I1121 14:05:32.125085  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:05:32.146008  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/test/nested/copy/291060/hosts --> /etc/test/nested/copy/291060/hosts (40 bytes)
	I1121 14:05:32.163914  310831 start.go:296] duration metric: took 185.493438ms for postStartSetup
	I1121 14:05:32.164007  310831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:05:32.164048  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:32.181735  310831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
	I1121 14:05:32.277517  310831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:05:32.282391  310831 fix.go:56] duration metric: took 6.613038101s for fixHost
	I1121 14:05:32.282406  310831 start.go:83] releasing machines lock for "functional-939098", held for 6.613082123s
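Note: the two df invocations above sample /var usage percent and free space by extracting a column with awk ('NR==2{print $5}'). A sketch of the same column parsing in Go (parseUseColumn is a hypothetical helper):

	package main

	import (
		"fmt"
		"strings"
	)

	// parseUseColumn pulls the Use% column from `df -h /var` output,
	// assuming a header line followed by one data row.
	func parseUseColumn(out string) (string, error) {
		lines := strings.Split(strings.TrimSpace(out), "\n")
		if len(lines) < 2 {
			return "", fmt.Errorf("unexpected df output: %q", out)
		}
		fields := strings.Fields(lines[1])
		if len(fields) < 5 {
			return "", fmt.Errorf("unexpected df row: %q", lines[1])
		}
		return fields[4], nil
	}

	func main() {
		sample := "Filesystem      Size  Used Avail Use% Mounted on\n/dev/root        20G  5.0G   14G  27% /\n"
		pct, err := parseUseColumn(sample)
		fmt.Println(pct, err) // 27% <nil>
	}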
	I1121 14:05:32.282482  310831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-939098
	I1121 14:05:32.299417  310831 ssh_runner.go:195] Run: cat /version.json
	I1121 14:05:32.299458  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:32.299724  310831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:05:32.299767  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:32.324897  310831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
	I1121 14:05:32.326316  310831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
	I1121 14:05:32.420304  310831 ssh_runner.go:195] Run: systemctl --version
	I1121 14:05:32.547062  310831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:05:32.585144  310831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:05:32.589426  310831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:05:32.589487  310831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:05:32.597557  310831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
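Note: before choosing a CNI, minikube renames any pre-existing bridge/podman configs in /etc/cni/net.d to *.mk_disabled (none were found here). A rough Go equivalent of that file selection (illustrative only; unlike the find command it does not filter on file type):

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// bridgeConfs mimics the name matching of the find command above.
	func bridgeConfs(dir string) ([]string, error) {
		entries, err := filepath.Glob(filepath.Join(dir, "*"))
		if err != nil {
			return nil, err
		}
		var out []string
		for _, e := range entries {
			base := filepath.Base(e)
			if strings.HasSuffix(base, ".mk_disabled") {
				continue // already disabled on a previous start
			}
			if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
				out = append(out, e)
			}
		}
		return out, nil
	}

	func main() {
		confs, err := bridgeConfs("/etc/cni/net.d")
		fmt.Println(confs, err)
	}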
	I1121 14:05:32.597572  310831 start.go:496] detecting cgroup driver to use...
	I1121 14:05:32.597604  310831 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:05:32.597652  310831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:05:32.613663  310831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:05:32.627370  310831 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:05:32.627438  310831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:05:32.643591  310831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:05:32.658224  310831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:05:32.792444  310831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:05:32.924261  310831 docker.go:234] disabling docker service ...
	I1121 14:05:32.924318  310831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:05:32.940112  310831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:05:32.954655  310831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:05:33.102421  310831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:05:33.249430  310831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:05:33.262282  310831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:05:33.277462  310831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:05:33.277523  310831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:05:33.286813  310831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 14:05:33.286881  310831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:05:33.296239  310831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:05:33.305524  310831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:05:33.314951  310831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:05:33.324068  310831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:05:33.333515  310831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:05:33.342451  310831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:05:33.351998  310831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:05:33.360443  310831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
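Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to set the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl. A hedged Go sketch of the pause_image substitution (setPauseImage is hypothetical):

	package main

	import (
		"fmt"
		"regexp"
	)

	var pauseLine = regexp.MustCompile(`(?m)^.*pause_image = .*$`)

	// setPauseImage performs the same line substitution as the sed command
	// above, pointing CRI-O at the requested pause image.
	func setPauseImage(conf, image string) string {
		return pauseLine.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
	}

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
		fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10.1"))
	}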
	I1121 14:05:33.368279  310831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:05:33.510341  310831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 14:05:41.456556  310831 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.946192552s)
	I1121 14:05:41.456573  310831 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:05:41.456638  310831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:05:41.460907  310831 start.go:564] Will wait 60s for crictl version
	I1121 14:05:41.460963  310831 ssh_runner.go:195] Run: which crictl
	I1121 14:05:41.464658  310831 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:05:41.488379  310831 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
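Note: after the ~8s CRI-O restart, minikube waits up to 60s for the socket to appear and then for crictl to answer a version query. A minimal polling sketch (waitForSocket is a hypothetical stand-in for the stat loop):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket stats the CRI socket until it exists, mirroring the
	// "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("socket %s not ready after %s", path, timeout)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}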
	I1121 14:05:41.488533  310831 ssh_runner.go:195] Run: crio --version
	I1121 14:05:41.517652  310831 ssh_runner.go:195] Run: crio --version
	I1121 14:05:41.550047  310831 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:05:41.553026  310831 cli_runner.go:164] Run: docker network inspect functional-939098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:05:41.569633  310831 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1121 14:05:41.576627  310831 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1121 14:05:41.579675  310831 kubeadm.go:884] updating cluster {Name:functional-939098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-939098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:05:41.579798  310831 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:05:41.579864  310831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:05:41.618584  310831 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:05:41.618596  310831 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:05:41.618663  310831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:05:41.645258  310831 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:05:41.645270  310831 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:05:41.645276  310831 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1121 14:05:41.645379  310831 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-939098 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-939098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
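Note: the kubelet unit above clears ExecStart and re-declares it with flags derived from the cluster config (hostname override, node IP, kubeconfig paths). A sketch assembling that flag line (kubeletFlags is illustrative, not minikube's template code; values are the ones from this run):

	package main

	import (
		"fmt"
		"strings"
	)

	// kubeletFlags reassembles the ExecStart flag line from the unit above.
	func kubeletFlags(nodeName, nodeIP string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--cgroups-per-qos=false",
			"--config=/var/lib/kubelet/config.yaml",
			"--enforce-node-allocatable=",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return strings.Join(flags, " ")
	}

	func main() {
		fmt.Println(kubeletFlags("functional-939098", "192.168.49.2"))
	}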
	I1121 14:05:41.645461  310831 ssh_runner.go:195] Run: crio config
	I1121 14:05:41.699252  310831 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1121 14:05:41.699272  310831 cni.go:84] Creating CNI manager for ""
	I1121 14:05:41.699281  310831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:05:41.699296  310831 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:05:41.699347  310831 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-939098 NodeName:functional-939098 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:05:41.699477  310831 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-939098"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
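	Note: the generated config above stitches the CRI socket, cgroupfs driver and pod/service subnets into InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents; the "0%" eviction thresholds deliberately disable disk-pressure eviction for CI. A toy text/template rendering of the kubelet fragment (field names here are assumptions for illustration, not minikube's actual template variables):

	package main

	import (
		"os"
		"text/template"
	)

	const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: {{.CgroupDriver}}
	containerRuntimeEndpoint: {{.CRISocket}}
	clusterDomain: "{{.DNSDomain}}"
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
		_ = t.Execute(os.Stdout, map[string]string{
			"CgroupDriver": "cgroupfs",
			"CRISocket":    "unix:///var/run/crio/crio.sock",
			"DNSDomain":    "cluster.local",
		})
	}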
	
	I1121 14:05:41.699546  310831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:05:41.707376  310831 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:05:41.707436  310831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:05:41.714813  310831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 14:05:41.727416  310831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:05:41.740070  310831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1121 14:05:41.753190  310831 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:05:41.757227  310831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:05:41.887945  310831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:05:41.901732  310831 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098 for IP: 192.168.49.2
	I1121 14:05:41.901743  310831 certs.go:195] generating shared ca certs ...
	I1121 14:05:41.901757  310831 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:05:41.901901  310831 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 14:05:41.901944  310831 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 14:05:41.901950  310831 certs.go:257] generating profile certs ...
	I1121 14:05:41.902042  310831 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.key
	I1121 14:05:41.902091  310831 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/apiserver.key.f0077fd0
	I1121 14:05:41.902126  310831 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/proxy-client.key
	I1121 14:05:41.902240  310831 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 14:05:41.902268  310831 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 14:05:41.902275  310831 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:05:41.902298  310831 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:05:41.902322  310831 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:05:41.902345  310831 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 14:05:41.902387  310831 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:05:41.902962  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:05:41.924299  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:05:41.943605  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:05:41.962429  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:05:41.981114  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:05:42.008549  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:05:42.031359  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:05:42.051246  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:05:42.073353  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:05:42.096636  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 14:05:42.121450  310831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 14:05:42.144551  310831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:05:42.161329  310831 ssh_runner.go:195] Run: openssl version
	I1121 14:05:42.169338  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 14:05:42.180293  310831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 14:05:42.186312  310831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 14:05:42.186386  310831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 14:05:42.232886  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 14:05:42.245100  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 14:05:42.268107  310831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 14:05:42.274517  310831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 14:05:42.274571  310831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 14:05:42.342017  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:05:42.367617  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:05:42.381509  310831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:05:42.386813  310831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:05:42.386868  310831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:05:42.431527  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
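Note: each CA certificate is copied under /usr/share/ca-certificates and linked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0 above), which is how OpenSSL locates trust anchors. A sketch that obtains the hash the same way the log does (assumes openssl is on PATH and the file exists):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash shells out to `openssl x509 -hash -noout`, as above; the
	// result names the /etc/ssl/certs/<hash>.0 symlink.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		fmt.Printf("/etc/ssl/certs/%s.0\n", h)
	}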
	I1121 14:05:42.440046  310831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:05:42.444548  310831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:05:42.491031  310831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:05:42.532628  310831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:05:42.578563  310831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:05:42.620504  310831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:05:42.662115  310831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
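Note: the -checkend 86400 runs above confirm each control-plane certificate remains valid for at least 24h before it is reused. An equivalent check in pure Go with crypto/x509 (expiresWithin is a hypothetical helper; a passing `-checkend` asserts the opposite of what it returns):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d of now.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}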
	I1121 14:05:42.706466  310831 kubeadm.go:401] StartCluster: {Name:functional-939098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-939098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:05:42.706550  310831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:05:42.706624  310831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:05:42.735540  310831 cri.go:89] found id: "caabed561250488e166ecd9d43923e15db4c0da68ad344dd7f4e26c4c6fe8fde"
	I1121 14:05:42.735551  310831 cri.go:89] found id: "3b3b4dc824663b18b3ef12853b120588e1884b02009c47d4bbba8c8e6c9b432a"
	I1121 14:05:42.735555  310831 cri.go:89] found id: "8f734ee18dadec024283ae9efb7c7b2f35b1a3891e771a44c22856fb2546abf7"
	I1121 14:05:42.735557  310831 cri.go:89] found id: "43c58796ec353fb49ee8fc91891e185dd87a2de343b3a43ae1d877fd07262273"
	I1121 14:05:42.735560  310831 cri.go:89] found id: "c3bade27638cdc4ff0105dc17e27a8bd698b2c22bbe19953668199e1f8f8596d"
	I1121 14:05:42.735562  310831 cri.go:89] found id: "831c757ba7a49d3dcac00c638809bf6d28cb3e3a5272cfaf3853ed06f1e9ef4c"
	I1121 14:05:42.735564  310831 cri.go:89] found id: "8f2a97a4b73b2a02270921862e38e64c3b78cd8740d3b346e5dd52f05bd381d7"
	I1121 14:05:42.735567  310831 cri.go:89] found id: "8b9e255f64e208207a7054efda4e0af9b8850b584894a06c1c5a11a22c94dd5a"
	I1121 14:05:42.735569  310831 cri.go:89] found id: "b1530f666cc783647c2b2f17785dd6827fc187baa753819f92c0d14568c3d38a"
	I1121 14:05:42.735574  310831 cri.go:89] found id: ""
	I1121 14:05:42.735632  310831 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:05:42.746653  310831 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:05:42Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:05:42.746723  310831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:05:42.754923  310831 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:05:42.754944  310831 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:05:42.754998  310831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:05:42.763003  310831 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:05:42.763523  310831 kubeconfig.go:125] found "functional-939098" server: "https://192.168.49.2:8441"
	I1121 14:05:42.764816  310831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:05:42.773017  310831 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-21 14:03:34.973558387 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-21 14:05:41.748119052 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
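Note: drift detection is just a diff between the deployed kubeadm.yaml and the freshly rendered .new file; any difference (here the enable-admission-plugins value) triggers a control-plane reconfigure. A byte-comparison sketch of that decision (minikube itself inspects diff's exit status):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// configDrifted compares the deployed and freshly rendered kubeadm configs.
	func configDrifted(oldPath, newPath string) (bool, error) {
		oldB, err := os.ReadFile(oldPath)
		if err != nil {
			return false, err
		}
		newB, err := os.ReadFile(newPath)
		if err != nil {
			return false, err
		}
		return !bytes.Equal(oldB, newB), nil
	}

	func main() {
		drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(drifted, err)
	}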
	I1121 14:05:42.773036  310831 kubeadm.go:1161] stopping kube-system containers ...
	I1121 14:05:42.773046  310831 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1121 14:05:42.773100  310831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:05:42.800898  310831 cri.go:89] found id: "caabed561250488e166ecd9d43923e15db4c0da68ad344dd7f4e26c4c6fe8fde"
	I1121 14:05:42.800910  310831 cri.go:89] found id: "3b3b4dc824663b18b3ef12853b120588e1884b02009c47d4bbba8c8e6c9b432a"
	I1121 14:05:42.800913  310831 cri.go:89] found id: "8f734ee18dadec024283ae9efb7c7b2f35b1a3891e771a44c22856fb2546abf7"
	I1121 14:05:42.800916  310831 cri.go:89] found id: "43c58796ec353fb49ee8fc91891e185dd87a2de343b3a43ae1d877fd07262273"
	I1121 14:05:42.800927  310831 cri.go:89] found id: "c3bade27638cdc4ff0105dc17e27a8bd698b2c22bbe19953668199e1f8f8596d"
	I1121 14:05:42.800930  310831 cri.go:89] found id: "831c757ba7a49d3dcac00c638809bf6d28cb3e3a5272cfaf3853ed06f1e9ef4c"
	I1121 14:05:42.800932  310831 cri.go:89] found id: "8f2a97a4b73b2a02270921862e38e64c3b78cd8740d3b346e5dd52f05bd381d7"
	I1121 14:05:42.800935  310831 cri.go:89] found id: "8b9e255f64e208207a7054efda4e0af9b8850b584894a06c1c5a11a22c94dd5a"
	I1121 14:05:42.800937  310831 cri.go:89] found id: "b1530f666cc783647c2b2f17785dd6827fc187baa753819f92c0d14568c3d38a"
	I1121 14:05:42.800943  310831 cri.go:89] found id: ""
	I1121 14:05:42.800947  310831 cri.go:252] Stopping containers: [caabed561250488e166ecd9d43923e15db4c0da68ad344dd7f4e26c4c6fe8fde 3b3b4dc824663b18b3ef12853b120588e1884b02009c47d4bbba8c8e6c9b432a 8f734ee18dadec024283ae9efb7c7b2f35b1a3891e771a44c22856fb2546abf7 43c58796ec353fb49ee8fc91891e185dd87a2de343b3a43ae1d877fd07262273 c3bade27638cdc4ff0105dc17e27a8bd698b2c22bbe19953668199e1f8f8596d 831c757ba7a49d3dcac00c638809bf6d28cb3e3a5272cfaf3853ed06f1e9ef4c 8f2a97a4b73b2a02270921862e38e64c3b78cd8740d3b346e5dd52f05bd381d7 8b9e255f64e208207a7054efda4e0af9b8850b584894a06c1c5a11a22c94dd5a b1530f666cc783647c2b2f17785dd6827fc187baa753819f92c0d14568c3d38a]
	I1121 14:05:42.801002  310831 ssh_runner.go:195] Run: which crictl
	I1121 14:05:42.804691  310831 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 caabed561250488e166ecd9d43923e15db4c0da68ad344dd7f4e26c4c6fe8fde 3b3b4dc824663b18b3ef12853b120588e1884b02009c47d4bbba8c8e6c9b432a 8f734ee18dadec024283ae9efb7c7b2f35b1a3891e771a44c22856fb2546abf7 43c58796ec353fb49ee8fc91891e185dd87a2de343b3a43ae1d877fd07262273 c3bade27638cdc4ff0105dc17e27a8bd698b2c22bbe19953668199e1f8f8596d 831c757ba7a49d3dcac00c638809bf6d28cb3e3a5272cfaf3853ed06f1e9ef4c 8f2a97a4b73b2a02270921862e38e64c3b78cd8740d3b346e5dd52f05bd381d7 8b9e255f64e208207a7054efda4e0af9b8850b584894a06c1c5a11a22c94dd5a b1530f666cc783647c2b2f17785dd6827fc187baa753819f92c0d14568c3d38a
	I1121 14:05:42.878386  310831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1121 14:05:42.996178  310831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:05:43.004983  310831 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 21 14:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 21 14:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 21 14:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 21 14:03 /etc/kubernetes/scheduler.conf
	
	I1121 14:05:43.005049  310831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1121 14:05:43.013957  310831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1121 14:05:43.022112  310831 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:05:43.022167  310831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:05:43.030000  310831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1121 14:05:43.038411  310831 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:05:43.038473  310831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:05:43.046535  310831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1121 14:05:43.054746  310831 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:05:43.054804  310831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:05:43.062779  310831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:05:43.071515  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1121 14:05:43.120492  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1121 14:05:45.758805  310831 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.638287527s)
	I1121 14:05:45.758870  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1121 14:05:45.982444  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1121 14:05:46.045987  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1121 14:05:46.113304  310831 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:05:46.113380  310831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:05:46.614336  310831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:05:47.114182  310831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:05:47.126626  310831 api_server.go:72] duration metric: took 1.013331649s to wait for apiserver process to appear ...
	I1121 14:05:47.126640  310831 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:05:47.126658  310831 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 14:05:51.107539  310831 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 14:05:51.107565  310831 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403: (body identical to the response above)
	I1121 14:05:51.107578  310831 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 14:05:51.354421  310831 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:05:51.354440  310831 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500: (healthz body identical to the 500 response above)
	I1121 14:05:51.354454  310831 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 14:05:51.362979  310831 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:05:51.362994  310831 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500: (healthz body identical to the 500 response above)
	I1121 14:05:51.627366  310831 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 14:05:51.635643  310831 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:05:51.635672  310831 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500: (healthz body identical to the 500 response above)
	I1121 14:05:52.126801  310831 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 14:05:52.148243  310831 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:05:52.148262  310831 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500: (healthz body identical to the 500 response above)
	I1121 14:05:52.626771  310831 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 14:05:52.635482  310831 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1121 14:05:52.650698  310831 api_server.go:141] control plane version: v1.34.1
	I1121 14:05:52.650714  310831 api_server.go:131] duration metric: took 5.524069023s to wait for apiserver health ...
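Note: the loop above polls /healthz until the 500s (rbac, scheduling and bootstrap poststarthooks still settling) give way to a 200, about 5.5s after the kubelet restart. A stripped-down version of such a wait loop (sketch only; the real checks authenticate with the cluster CA rather than skipping TLS verification):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls the given /healthz URL until it answers 200 or the
	// deadline passes.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.49.2:8441/healthz", time.Minute))
	}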
	I1121 14:05:52.650722  310831 cni.go:84] Creating CNI manager for ""
	I1121 14:05:52.650728  310831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:05:52.654310  310831 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:05:52.657373  310831 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:05:52.661587  310831 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:05:52.661597  310831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:05:52.675947  310831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:05:53.169459  310831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:05:53.174079  310831 system_pods.go:59] 8 kube-system pods found
	I1121 14:05:53.174107  310831 system_pods.go:61] "coredns-66bc5c9577-z9ns8" [5042f506-7e65-423e-85d5-591810b00b9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:05:53.174122  310831 system_pods.go:61] "etcd-functional-939098" [ead87003-6005-4fd3-9651-96e52be4c390] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:05:53.174132  310831 system_pods.go:61] "kindnet-r6rl2" [5f407632-ca67-4b54-9ca8-332530fa5b4c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1121 14:05:53.174139  310831 system_pods.go:61] "kube-apiserver-functional-939098" [fa173df2-7a4e-44f4-8a1c-aa866156b83a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:05:53.174150  310831 system_pods.go:61] "kube-controller-manager-functional-939098" [e9fabac5-1ea4-4d67-b6d2-8cb47567df88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:05:53.174156  310831 system_pods.go:61] "kube-proxy-w7d6c" [0fdf51a7-bc5c-4d97-a730-8aff384ac3a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1121 14:05:53.174166  310831 system_pods.go:61] "kube-scheduler-functional-939098" [bff39350-7b83-4931-82f1-333f5fc4afa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:05:53.174172  310831 system_pods.go:61] "storage-provisioner" [dbf5b44f-07ff-4eda-aec4-255e9e4b3762] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:05:53.174177  310831 system_pods.go:74] duration metric: took 4.706525ms to wait for pod list to return data ...
	I1121 14:05:53.174185  310831 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:05:53.184817  310831 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:05:53.184839  310831 node_conditions.go:123] node cpu capacity is 2
	I1121 14:05:53.184850  310831 node_conditions.go:105] duration metric: took 10.660826ms to run NodePressure ...
	I1121 14:05:53.184912  310831 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1121 14:05:53.535515  310831 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1121 14:05:53.539207  310831 kubeadm.go:744] kubelet initialised
	I1121 14:05:53.539219  310831 kubeadm.go:745] duration metric: took 3.691343ms waiting for restarted kubelet to initialise ...
	I1121 14:05:53.539233  310831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:05:53.548527  310831 ops.go:34] apiserver oom_adj: -16
	I1121 14:05:53.548540  310831 kubeadm.go:602] duration metric: took 10.793589954s to restartPrimaryControlPlane
	I1121 14:05:53.548547  310831 kubeadm.go:403] duration metric: took 10.842090421s to StartCluster
	I1121 14:05:53.548562  310831 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:05:53.548628  310831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:05:53.549256  310831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:05:53.549464  310831 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:05:53.549720  310831 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:05:53.549753  310831 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:05:53.549806  310831 addons.go:70] Setting storage-provisioner=true in profile "functional-939098"
	I1121 14:05:53.549823  310831 addons.go:239] Setting addon storage-provisioner=true in "functional-939098"
	W1121 14:05:53.549827  310831 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:05:53.549846  310831 host.go:66] Checking if "functional-939098" exists ...
	I1121 14:05:53.549941  310831 addons.go:70] Setting default-storageclass=true in profile "functional-939098"
	I1121 14:05:53.549957  310831 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-939098"
	I1121 14:05:53.550251  310831 cli_runner.go:164] Run: docker container inspect functional-939098 --format={{.State.Status}}
	I1121 14:05:53.550267  310831 cli_runner.go:164] Run: docker container inspect functional-939098 --format={{.State.Status}}
	I1121 14:05:53.552795  310831 out.go:179] * Verifying Kubernetes components...
	I1121 14:05:53.556006  310831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:05:53.589256  310831 addons.go:239] Setting addon default-storageclass=true in "functional-939098"
	W1121 14:05:53.589267  310831 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:05:53.589289  310831 host.go:66] Checking if "functional-939098" exists ...
	I1121 14:05:53.589784  310831 cli_runner.go:164] Run: docker container inspect functional-939098 --format={{.State.Status}}
	I1121 14:05:53.591304  310831 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:05:53.594546  310831 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:05:53.594557  310831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:05:53.594622  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:53.622046  310831 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:05:53.622063  310831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:05:53.622122  310831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:05:53.649946  310831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
	I1121 14:05:53.672529  310831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
	I1121 14:05:53.776837  310831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:05:53.792038  310831 node_ready.go:35] waiting up to 6m0s for node "functional-939098" to be "Ready" ...
	I1121 14:05:53.795276  310831 node_ready.go:49] node "functional-939098" is "Ready"
	I1121 14:05:53.795292  310831 node_ready.go:38] duration metric: took 3.234481ms for node "functional-939098" to be "Ready" ...
	I1121 14:05:53.795304  310831 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:05:53.795364  310831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:05:53.817549  310831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:05:53.818936  310831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:05:53.821221  310831 api_server.go:72] duration metric: took 271.731516ms to wait for apiserver process to appear ...
	I1121 14:05:53.821234  310831 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:05:53.821250  310831 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 14:05:53.833549  310831 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1121 14:05:53.834861  310831 api_server.go:141] control plane version: v1.34.1
	I1121 14:05:53.834873  310831 api_server.go:131] duration metric: took 13.634527ms to wait for apiserver health ...
	I1121 14:05:53.834891  310831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:05:53.838800  310831 system_pods.go:59] 8 kube-system pods found
	I1121 14:05:53.838832  310831 system_pods.go:61] "coredns-66bc5c9577-z9ns8" [5042f506-7e65-423e-85d5-591810b00b9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:05:53.838839  310831 system_pods.go:61] "etcd-functional-939098" [ead87003-6005-4fd3-9651-96e52be4c390] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:05:53.838844  310831 system_pods.go:61] "kindnet-r6rl2" [5f407632-ca67-4b54-9ca8-332530fa5b4c] Running
	I1121 14:05:53.838852  310831 system_pods.go:61] "kube-apiserver-functional-939098" [fa173df2-7a4e-44f4-8a1c-aa866156b83a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:05:53.838861  310831 system_pods.go:61] "kube-controller-manager-functional-939098" [e9fabac5-1ea4-4d67-b6d2-8cb47567df88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:05:53.838866  310831 system_pods.go:61] "kube-proxy-w7d6c" [0fdf51a7-bc5c-4d97-a730-8aff384ac3a7] Running
	I1121 14:05:53.838874  310831 system_pods.go:61] "kube-scheduler-functional-939098" [bff39350-7b83-4931-82f1-333f5fc4afa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:05:53.838878  310831 system_pods.go:61] "storage-provisioner" [dbf5b44f-07ff-4eda-aec4-255e9e4b3762] Running
	I1121 14:05:53.838885  310831 system_pods.go:74] duration metric: took 3.986593ms to wait for pod list to return data ...
	I1121 14:05:53.838902  310831 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:05:53.842466  310831 default_sa.go:45] found service account: "default"
	I1121 14:05:53.842478  310831 default_sa.go:55] duration metric: took 3.572038ms for default service account to be created ...
	I1121 14:05:53.842487  310831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:05:53.847483  310831 system_pods.go:86] 8 kube-system pods found
	I1121 14:05:53.847517  310831 system_pods.go:89] "coredns-66bc5c9577-z9ns8" [5042f506-7e65-423e-85d5-591810b00b9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:05:53.847526  310831 system_pods.go:89] "etcd-functional-939098" [ead87003-6005-4fd3-9651-96e52be4c390] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:05:53.847531  310831 system_pods.go:89] "kindnet-r6rl2" [5f407632-ca67-4b54-9ca8-332530fa5b4c] Running
	I1121 14:05:53.847537  310831 system_pods.go:89] "kube-apiserver-functional-939098" [fa173df2-7a4e-44f4-8a1c-aa866156b83a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:05:53.847543  310831 system_pods.go:89] "kube-controller-manager-functional-939098" [e9fabac5-1ea4-4d67-b6d2-8cb47567df88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:05:53.847547  310831 system_pods.go:89] "kube-proxy-w7d6c" [0fdf51a7-bc5c-4d97-a730-8aff384ac3a7] Running
	I1121 14:05:53.847552  310831 system_pods.go:89] "kube-scheduler-functional-939098" [bff39350-7b83-4931-82f1-333f5fc4afa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:05:53.847555  310831 system_pods.go:89] "storage-provisioner" [dbf5b44f-07ff-4eda-aec4-255e9e4b3762] Running
	I1121 14:05:53.847562  310831 system_pods.go:126] duration metric: took 5.070141ms to wait for k8s-apps to be running ...
	I1121 14:05:53.847590  310831 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:05:53.847658  310831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:05:54.677032  310831 system_svc.go:56] duration metric: took 829.438204ms WaitForService to wait for kubelet
	I1121 14:05:54.677055  310831 kubeadm.go:587] duration metric: took 1.12756666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:05:54.677084  310831 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:05:54.681575  310831 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:05:54.681590  310831 node_conditions.go:123] node cpu capacity is 2
	I1121 14:05:54.681600  310831 node_conditions.go:105] duration metric: took 4.511608ms to run NodePressure ...
	I1121 14:05:54.681610  310831 start.go:242] waiting for startup goroutines ...
	I1121 14:05:54.689727  310831 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:05:54.692656  310831 addons.go:530] duration metric: took 1.142876032s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:05:54.692701  310831 start.go:247] waiting for cluster config update ...
	I1121 14:05:54.692712  310831 start.go:256] writing updated cluster config ...
	I1121 14:05:54.693024  310831 ssh_runner.go:195] Run: rm -f paused
	I1121 14:05:54.696831  310831 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:05:54.701751  310831 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z9ns8" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:05:56.207264  310831 pod_ready.go:94] pod "coredns-66bc5c9577-z9ns8" is "Ready"
	I1121 14:05:56.207278  310831 pod_ready.go:86] duration metric: took 1.505512068s for pod "coredns-66bc5c9577-z9ns8" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:05:56.210063  310831 pod_ready.go:83] waiting for pod "etcd-functional-939098" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 14:05:58.216232  310831 pod_ready.go:104] pod "etcd-functional-939098" is not "Ready", error: <nil>
	W1121 14:06:00.244810  310831 pod_ready.go:104] pod "etcd-functional-939098" is not "Ready", error: <nil>
	I1121 14:06:01.216080  310831 pod_ready.go:94] pod "etcd-functional-939098" is "Ready"
	I1121 14:06:01.216094  310831 pod_ready.go:86] duration metric: took 5.006019369s for pod "etcd-functional-939098" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:06:01.219165  310831 pod_ready.go:83] waiting for pod "kube-apiserver-functional-939098" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:06:02.224867  310831 pod_ready.go:94] pod "kube-apiserver-functional-939098" is "Ready"
	I1121 14:06:02.224881  310831 pod_ready.go:86] duration metric: took 1.005693205s for pod "kube-apiserver-functional-939098" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:06:02.227387  310831 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-939098" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:06:02.232721  310831 pod_ready.go:94] pod "kube-controller-manager-functional-939098" is "Ready"
	I1121 14:06:02.232735  310831 pod_ready.go:86] duration metric: took 5.334679ms for pod "kube-controller-manager-functional-939098" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:06:02.235245  310831 pod_ready.go:83] waiting for pod "kube-proxy-w7d6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:06:02.240914  310831 pod_ready.go:94] pod "kube-proxy-w7d6c" is "Ready"
	I1121 14:06:02.240928  310831 pod_ready.go:86] duration metric: took 5.670776ms for pod "kube-proxy-w7d6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:06:02.414214  310831 pod_ready.go:83] waiting for pod "kube-scheduler-functional-939098" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:06:02.813342  310831 pod_ready.go:94] pod "kube-scheduler-functional-939098" is "Ready"
	I1121 14:06:02.813356  310831 pod_ready.go:86] duration metric: took 399.128983ms for pod "kube-scheduler-functional-939098" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:06:02.813366  310831 pod_ready.go:40] duration metric: took 8.116514666s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:06:02.868087  310831 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 14:06:02.872074  310831 out.go:179] * Done! kubectl is now configured to use "functional-939098" cluster and "default" namespace by default
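	
	==> editor's note: apiserver healthz polling (illustrative Go sketch) <==
	The repeated api_server.go lines above show minikube polling the apiserver's /healthz endpoint until it returns 200 ("ok"), then reading the control-plane version. A minimal sketch of that pattern, assuming a self-signed test-cluster CA; the function name, retry interval, and timeout are illustrative, not minikube's actual implementation:
	
	// healthz_sketch.go - poll an apiserver /healthz endpoint until it reports 200,
	// mirroring the "Checking apiserver healthz at ..." lines in the log above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		// Test clusters use self-signed CAs, so skip certificate verification here.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // retry interval is an assumption
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}
	
	func main() {
		// Endpoint taken from the log above.
		if err := waitForHealthz("https://192.168.49.2:8441/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}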
	
	
	==> CRI-O <==
	Nov 21 14:06:39 functional-939098 crio[3724]: time="2025-11-21T14:06:39.179286907Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-42nx5 Namespace:default ID:01116ba2b4d45d77b11272556eba8b54c5d5dbea39bd7945c59842b9ddf12a5d UID:ef8c30f5-8aba-4499-938e-629fd668e6cb NetNS:/var/run/netns/fa4e307e-d1d0-4d13-921a-1e3f3b441625 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002bfc210}] Aliases:map[]}"
	Nov 21 14:06:39 functional-939098 crio[3724]: time="2025-11-21T14:06:39.179435373Z" level=info msg="Checking pod default_hello-node-75c85bcc94-42nx5 for CNI network kindnet (type=ptp)"
	Nov 21 14:06:39 functional-939098 crio[3724]: time="2025-11-21T14:06:39.183005833Z" level=info msg="Ran pod sandbox 01116ba2b4d45d77b11272556eba8b54c5d5dbea39bd7945c59842b9ddf12a5d with infra container: default/hello-node-75c85bcc94-42nx5/POD" id=5cc484e0-bd89-4bbd-8a3c-60d45c3183cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:06:39 functional-939098 crio[3724]: time="2025-11-21T14:06:39.184621673Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=30d4277d-c1be-495b-a8b8-c498b04b5061 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.168882697Z" level=info msg="Stopping pod sandbox: 30233b3525e1579fa27c7a16457d984a268e23de99f47f3da693f210dc7c1c74" id=cea6c3d5-7fbc-45d6-9e16-e5d08e828c8b name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.168939305Z" level=info msg="Stopped pod sandbox (already stopped): 30233b3525e1579fa27c7a16457d984a268e23de99f47f3da693f210dc7c1c74" id=cea6c3d5-7fbc-45d6-9e16-e5d08e828c8b name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.169729571Z" level=info msg="Removing pod sandbox: 30233b3525e1579fa27c7a16457d984a268e23de99f47f3da693f210dc7c1c74" id=bdc0815a-6c3e-420f-bad1-3567539c55e1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.173807469Z" level=info msg="Removed pod sandbox: 30233b3525e1579fa27c7a16457d984a268e23de99f47f3da693f210dc7c1c74" id=bdc0815a-6c3e-420f-bad1-3567539c55e1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.174519654Z" level=info msg="Stopping pod sandbox: d3cf555437612c996b911ed268667ea73b07a04f401bab1dd330e65c86503b7c" id=fd323cf6-7a84-4c67-bc29-bd2e55dd0a4a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.174665732Z" level=info msg="Stopped pod sandbox (already stopped): d3cf555437612c996b911ed268667ea73b07a04f401bab1dd330e65c86503b7c" id=fd323cf6-7a84-4c67-bc29-bd2e55dd0a4a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.175075971Z" level=info msg="Removing pod sandbox: d3cf555437612c996b911ed268667ea73b07a04f401bab1dd330e65c86503b7c" id=b8f3ab6e-83d2-4ffc-a93a-6053f4757baf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.179032456Z" level=info msg="Removed pod sandbox: d3cf555437612c996b911ed268667ea73b07a04f401bab1dd330e65c86503b7c" id=b8f3ab6e-83d2-4ffc-a93a-6053f4757baf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.179566119Z" level=info msg="Stopping pod sandbox: b22cdb0d9116e580a54ffba6a70dc4c8208d9a01362858c80662be8529228ea3" id=898e3dbb-6d99-4cfd-84d2-8b1a4dc1ec5b name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.179689288Z" level=info msg="Stopped pod sandbox (already stopped): b22cdb0d9116e580a54ffba6a70dc4c8208d9a01362858c80662be8529228ea3" id=898e3dbb-6d99-4cfd-84d2-8b1a4dc1ec5b name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.180115002Z" level=info msg="Removing pod sandbox: b22cdb0d9116e580a54ffba6a70dc4c8208d9a01362858c80662be8529228ea3" id=27ffc92b-840d-4ca6-b419-db77f787a985 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 14:06:46 functional-939098 crio[3724]: time="2025-11-21T14:06:46.184135619Z" level=info msg="Removed pod sandbox: b22cdb0d9116e580a54ffba6a70dc4c8208d9a01362858c80662be8529228ea3" id=27ffc92b-840d-4ca6-b419-db77f787a985 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 14:06:52 functional-939098 crio[3724]: time="2025-11-21T14:06:52.134412395Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cd73fafa-4275-4b2c-9118-31af4ad4ab71 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:07:04 functional-939098 crio[3724]: time="2025-11-21T14:07:04.137737044Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f5e1314c-91e4-4a8a-8fb5-59e3db7ac7cd name=/runtime.v1.ImageService/PullImage
	Nov 21 14:07:20 functional-939098 crio[3724]: time="2025-11-21T14:07:20.134868283Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=de0a0608-fad0-4276-9d90-1aebcebce6f9 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:07:55 functional-939098 crio[3724]: time="2025-11-21T14:07:55.133536032Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e0cb6f9b-26bc-4e07-9414-41a878dfd366 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:08:10 functional-939098 crio[3724]: time="2025-11-21T14:08:10.134743724Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4acc3b6e-9640-41db-8296-185b9b992fd7 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:09:16 functional-939098 crio[3724]: time="2025-11-21T14:09:16.134357016Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=25afeebb-da70-40dd-975a-e545fad0f16b name=/runtime.v1.ImageService/PullImage
	Nov 21 14:09:43 functional-939098 crio[3724]: time="2025-11-21T14:09:43.1343123Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8153aebf-58be-4607-b573-a899e7a89af0 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:12:03 functional-939098 crio[3724]: time="2025-11-21T14:12:03.133535078Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e9d0a5f1-3f61-4bb7-a98c-095aa3b29a0d name=/runtime.v1.ImageService/PullImage
	Nov 21 14:12:33 functional-939098 crio[3724]: time="2025-11-21T14:12:33.13325086Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d6ca887e-565c-4212-acb7-34b846a00b3c name=/runtime.v1.ImageService/PullImage
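	
	==> editor's note: image-pull retry backoff (illustrative Go sketch) <==
	The CRI-O log above re-requests the kicbase/echo-server:latest pull at roughly doubling intervals (14:06:39, 14:06:52, 14:07:04, 14:07:20, 14:07:55, ... out to multi-minute gaps), which matches kubelet's image-pull backoff: start around 10s, double per failure, cap at 5m, with jitter. A sketch of that capped exponential backoff; the constants follow kubelet's documented defaults, but the code is an illustration, not kubelet source:
	
	// backoff_sketch.go - print the capped exponential retry schedule visible in
	// the CRI-O "Pulling image" timestamps above.
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		delay := 10 * time.Second   // assumed initial image-pull backoff
		maxDelay := 5 * time.Minute // assumed backoff ceiling
		for attempt := 1; attempt <= 8; attempt++ {
			fmt.Printf("attempt %d: retry pull after %s\n", attempt, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}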
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bf1175b3181de       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   d8cef71958c30       sp-pod                                      default
	e9b52aa1214dc       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   d671a6cd001bf       nginx-svc                                   default
	25b51b8f90e9d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   93ae6223a93ef       kindnet-r6rl2                               kube-system
	b62562d24bc55       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   2d91587367d8f       kube-proxy-w7d6c                            kube-system
	738d89637fea9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       4                   0c3c3f07b862d       storage-provisioner                         kube-system
	166f0ceb62aba       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   3                   e025f184ee1e9       coredns-66bc5c9577-z9ns8                    kube-system
	ec5f1a53df66b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   cb61d4f766f1b       kube-apiserver-functional-939098            kube-system
	d373cf2c1abb1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   f9f6dbd85eaeb       kube-scheduler-functional-939098            kube-system
	e15be56aaeb59       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   479ee73395ab9       kube-controller-manager-functional-939098   kube-system
	fc28356bd8371       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   7988b0df45661       etcd-functional-939098                      kube-system
	caabed5612504       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       3                   0c3c3f07b862d       storage-provisioner                         kube-system
	3b3b4dc824663       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   2                   e025f184ee1e9       coredns-66bc5c9577-z9ns8                    kube-system
	8f734ee18dade       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   93ae6223a93ef       kindnet-r6rl2                               kube-system
	c3bade27638cd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   2                   479ee73395ab9       kube-controller-manager-functional-939098   kube-system
	831c757ba7a49       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      2                   7988b0df45661       etcd-functional-939098                      kube-system
	8f2a97a4b73b2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   f9f6dbd85eaeb       kube-scheduler-functional-939098            kube-system
	b1530f666cc78       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                2                   2d91587367d8f       kube-proxy-w7d6c                            kube-system
	
	
	==> coredns [166f0ceb62abab4d71d9b2e8090847130562216843bcfd1c0cc23f4ba040160d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54651 - 30045 "HINFO IN 5486153950206986077.8890214055679875500. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016108403s
	
	
	==> coredns [3b3b4dc824663b18b3ef12853b120588e1884b02009c47d4bbba8c8e6c9b432a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48494 - 55708 "HINFO IN 5633374943358219034.3161971935085720938. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022539886s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-939098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-939098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=functional-939098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_03_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:03:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-939098
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:16:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:16:23 +0000   Fri, 21 Nov 2025 14:03:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:16:23 +0000   Fri, 21 Nov 2025 14:03:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:16:23 +0000   Fri, 21 Nov 2025 14:03:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:16:23 +0000   Fri, 21 Nov 2025 14:04:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-939098
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                557a7e69-2487-4394-9188-eb196ed9b796
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-42nx5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-6q5rv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-66bc5c9577-z9ns8                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-939098                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-r6rl2                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-939098             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-939098    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-w7d6c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-939098             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-939098 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-939098 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-939098 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-939098 event: Registered Node functional-939098 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-939098 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-939098 event: Registered Node functional-939098 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-939098 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-939098 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-939098 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-939098 event: Registered Node functional-939098 in Controller
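	
	==> editor's note: allocated-resource percentages (illustrative Go sketch) <==
	The percentages in the "Allocated resources" table above are simply summed pod requests divided by node allocatable: 850m requested CPU against 2 cores (2000m) gives 42%, and 220Mi requested memory against 8022300Ki gives 2%. A small sketch of that arithmetic, with values copied from the table (the helper code is illustrative):
	
	// alloc_sketch.go - reproduce the request percentages from the describe-nodes
	// table above using integer millicore/KiB arithmetic.
	package main
	
	import "fmt"
	
	func main() {
		// Allocatable cpu = 2 cores = 2000m; requested cpu = 850m.
		allocCPUm, reqCPUm := int64(2000), int64(850)
		fmt.Printf("cpu %dm (%d%%)\n", reqCPUm, reqCPUm*100/allocCPUm) // -> cpu 850m (42%)
	
		// Allocatable memory = 8022300Ki; requested memory = 220Mi = 225280Ki.
		allocMemKi, reqMemKi := int64(8022300), int64(220*1024)
		fmt.Printf("memory %dMi (%d%%)\n", reqMemKi/1024, reqMemKi*100/allocMemKi) // -> memory 220Mi (2%)
	}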
	
	
	==> dmesg <==
	[Nov21 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015310] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503949] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032916] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.894651] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.192036] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 12:49] hrtimer: interrupt took 26907018 ns
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	[Nov21 13:57] overlayfs: idmapped layers are currently not supported
	[  +0.074753] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov21 14:02] overlayfs: idmapped layers are currently not supported
	[Nov21 14:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [831c757ba7a49d3dcac00c638809bf6d28cb3e3a5272cfaf3853ed06f1e9ef4c] <==
	{"level":"warn","ts":"2025-11-21T14:05:02.965018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:03.016683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:03.059074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:03.090576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:03.117321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:03.145610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:03.204886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52942","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:05:26.739397Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-21T14:05:26.739447Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-939098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-21T14:05:26.739549Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-21T14:05:26.890117Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-21T14:05:26.891575Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:05:26.891630Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-21T14:05:26.891700Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-21T14:05:26.891717Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-21T14:05:26.891683Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-21T14:05:26.891789Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-21T14:05:26.891822Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-21T14:05:26.891911Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-21T14:05:26.891936Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-21T14:05:26.891944Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:05:26.895639Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-21T14:05:26.895725Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:05:26.895759Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-21T14:05:26.895768Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-939098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [fc28356bd8371a047d562ba13f216afe960b4f4d8570a9985ca440c114583201] <==
	{"level":"warn","ts":"2025-11-21T14:05:49.977712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:49.988326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.008172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.024053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.046029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.063958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.114556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.131951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.149541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.166979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.183932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.200884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.230162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.233852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.251463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.268004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.286086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.309122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.353825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.358081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.375353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:05:50.476605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:15:48.979130Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1114}
	{"level":"info","ts":"2025-11-21T14:15:49.003194Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1114,"took":"23.781751ms","hash":2935786889,"current-db-size-bytes":3194880,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1363968,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-21T14:15:49.003267Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2935786889,"revision":1114,"compact-revision":-1}
	
	
	==> kernel <==
	 14:16:24 up  1:58,  0 user,  load average: 0.30, 0.35, 1.12
	Linux functional-939098 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [25b51b8f90e9dec3edf284e16a2cebe9ec5d4d432865446774e5680dae24c0be] <==
	I1121 14:14:22.901679       1 main.go:301] handling current node
	I1121 14:14:32.901268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:14:32.901318       1 main.go:301] handling current node
	I1121 14:14:42.901813       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:14:42.901870       1 main.go:301] handling current node
	I1121 14:14:52.901082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:14:52.901193       1 main.go:301] handling current node
	I1121 14:15:02.901445       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:15:02.901480       1 main.go:301] handling current node
	I1121 14:15:12.900988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:15:12.901025       1 main.go:301] handling current node
	I1121 14:15:22.901670       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:15:22.901705       1 main.go:301] handling current node
	I1121 14:15:32.901391       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:15:32.901426       1 main.go:301] handling current node
	I1121 14:15:42.901800       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:15:42.901838       1 main.go:301] handling current node
	I1121 14:15:52.901010       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:15:52.901060       1 main.go:301] handling current node
	I1121 14:16:02.901861       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:16:02.901898       1 main.go:301] handling current node
	I1121 14:16:12.900920       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:16:12.901045       1 main.go:301] handling current node
	I1121 14:16:22.901828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:16:22.901873       1 main.go:301] handling current node
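	
	==> editor's note: periodic node reconcile loop (illustrative Go sketch) <==
	The kindnet log above emits "Handling node with IPs" every ten seconds: a simple periodic reconcile loop over the node list. A minimal version of that pattern, where the interval matches the log timestamps and handleNode is a stand-in, not kindnet's function:
	
	// reconcile_sketch.go - a ticker-driven reconcile loop like the one producing
	// the repeating kindnet log lines above (bounded to three iterations here).
	package main
	
	import (
		"log"
		"time"
	)
	
	func handleNode(ip string) {
		log.Printf("Handling node with IPs: map[%s:{}]", ip)
		log.Printf("handling current node")
	}
	
	func main() {
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for i := 0; i < 3; i++ {
			<-ticker.C
			handleNode("192.168.49.2")
		}
	}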
	
	
	==> kindnet [8f734ee18dadec024283ae9efb7c7b2f35b1a3891e771a44c22856fb2546abf7] <==
	I1121 14:05:00.010063       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:05:00.010557       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1121 14:05:00.146389       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:05:00.146486       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:05:00.146539       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:05:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:05:00.446534       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:05:00.446642       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:05:00.446677       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:05:00.447663       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:05:04.448454       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:05:04.448554       1 metrics.go:72] Registering metrics
	I1121 14:05:04.448642       1 controller.go:711] "Syncing nftables rules"
	I1121 14:05:10.446110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:05:10.446163       1 main.go:301] handling current node
	I1121 14:05:20.447025       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:05:20.447103       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ec5f1a53df66b1ace4e24412c273a40717e05f9499bff8f306a1d72a4523b153] <==
	I1121 14:05:51.337262       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1121 14:05:51.338312       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:05:51.338916       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:05:51.338992       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:05:51.339025       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:05:51.346974       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:05:51.347310       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:05:51.351127       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:05:51.351957       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1121 14:05:51.359434       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 14:05:52.067228       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:05:52.250586       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:05:53.161737       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:05:53.428549       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:05:53.516094       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:05:53.524137       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:05:54.921006       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:05:54.975358       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:05:55.016783       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:06:06.212704       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.189.213"}
	I1121 14:06:12.700539       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.160.60"}
	I1121 14:06:22.394390       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.216.122"}
	E1121 14:06:31.421485       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1121 14:06:38.939161       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.154.82"}
	I1121 14:15:51.266082       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c3bade27638cdc4ff0105dc17e27a8bd698b2c22bbe19953668199e1f8f8596d] <==
	I1121 14:05:07.405794       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:05:07.405892       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:05:07.405979       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-939098"
	I1121 14:05:07.406041       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 14:05:07.406419       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:05:07.406537       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:05:07.406587       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:05:07.407982       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:05:07.408056       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:05:07.410972       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:05:07.411508       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:05:07.413811       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:05:07.415000       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:05:07.415652       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:05:07.417313       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:05:07.417419       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:05:07.420523       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:05:07.420541       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:05:07.420559       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:05:07.423789       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:05:07.425930       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:05:07.427115       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:05:07.428264       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:05:07.456083       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:05:07.459248       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [e15be56aaeb59ba697a0c340b042c6daee393741163e2cb31d9bc89a32b2df80] <==
	I1121 14:05:54.565697       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:05:54.567214       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:05:54.567255       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 14:05:54.572965       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:05:54.583140       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:05:54.583631       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:05:54.583738       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:05:54.588258       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:05:54.592994       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:05:54.602791       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:05:54.605772       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:05:54.593852       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1121 14:05:54.644692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:05:54.608757       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:05:54.605883       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:05:54.645265       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:05:54.645273       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:05:54.611778       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:05:54.611816       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:05:54.622973       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:05:54.623011       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:05:54.634185       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:05:54.702923       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:05:54.703014       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:05:54.703032       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [b1530f666cc783647c2b2f17785dd6827fc187baa753819f92c0d14568c3d38a] <==
	I1121 14:04:59.709825       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:05:01.469352       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:05:04.467536       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:05:04.500271       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 14:05:04.517890       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:05:04.940567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:05:04.940741       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:05:05.059890       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:05:05.060327       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:05:05.060600       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:05:05.062127       1 config.go:200] "Starting service config controller"
	I1121 14:05:05.062188       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:05:05.062227       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:05:05.062254       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:05:05.062299       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:05:05.062326       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:05:05.063002       1 config.go:309] "Starting node config controller"
	I1121 14:05:05.066297       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:05:05.066390       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:05:05.164493       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:05:05.169845       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:05:05.169893       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b62562d24bc559e89f5742caa6e6dbaf00f6ddb7c6e76f8224b964b43a677d91] <==
	I1121 14:05:52.643588       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:05:52.738134       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:05:52.839139       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:05:52.839181       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 14:05:52.839284       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:05:52.872044       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:05:52.872162       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:05:52.876629       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:05:52.876970       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:05:52.877157       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:05:52.878729       1 config.go:200] "Starting service config controller"
	I1121 14:05:52.878899       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:05:52.878945       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:05:52.878971       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:05:52.879005       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:05:52.879030       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:05:52.879741       1 config.go:309] "Starting node config controller"
	I1121 14:05:52.882104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:05:52.882161       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:05:52.979522       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:05:52.979562       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:05:52.979618       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8f2a97a4b73b2a02270921862e38e64c3b78cd8740d3b346e5dd52f05bd381d7] <==
	I1121 14:05:02.007525       1 serving.go:386] Generated self-signed cert in-memory
	I1121 14:05:05.061810       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:05:05.061848       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:05:05.089655       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 14:05:05.089797       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 14:05:05.089864       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:05:05.089896       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:05:05.090130       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:05:05.090167       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:05:05.091331       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:05:05.099830       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:05:05.192710       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:05:05.192776       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 14:05:05.192888       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:05:26.735637       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1121 14:05:26.735739       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1121 14:05:26.735750       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1121 14:05:26.735771       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:05:26.735789       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:05:26.735805       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1121 14:05:26.736054       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1121 14:05:26.736084       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d373cf2c1abb1fa5d4100a9c484e9915807eb6e5417eb8bc0d2434487390d8d2] <==
	I1121 14:05:49.538797       1 serving.go:386] Generated self-signed cert in-memory
	W1121 14:05:51.156354       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:05:51.156473       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:05:51.156509       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:05:51.156551       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:05:51.288613       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:05:51.288721       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:05:51.297443       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:05:51.297543       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:05:51.297385       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:05:51.298037       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:05:51.398293       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:13:51 functional-939098 kubelet[4062]: E1121 14:13:51.133215    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:13:59 functional-939098 kubelet[4062]: E1121 14:13:59.133169    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:14:04 functional-939098 kubelet[4062]: E1121 14:14:04.135761    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:14:14 functional-939098 kubelet[4062]: E1121 14:14:14.133730    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:14:17 functional-939098 kubelet[4062]: E1121 14:14:17.132976    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:14:28 functional-939098 kubelet[4062]: E1121 14:14:28.134461    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:14:29 functional-939098 kubelet[4062]: E1121 14:14:29.133698    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:14:39 functional-939098 kubelet[4062]: E1121 14:14:39.133243    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:14:44 functional-939098 kubelet[4062]: E1121 14:14:44.133780    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:14:50 functional-939098 kubelet[4062]: E1121 14:14:50.134800    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:14:56 functional-939098 kubelet[4062]: E1121 14:14:56.134093    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:15:05 functional-939098 kubelet[4062]: E1121 14:15:05.133559    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:15:10 functional-939098 kubelet[4062]: E1121 14:15:10.133608    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:15:17 functional-939098 kubelet[4062]: E1121 14:15:17.133277    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:15:21 functional-939098 kubelet[4062]: E1121 14:15:21.133262    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:15:31 functional-939098 kubelet[4062]: E1121 14:15:31.133039    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:15:33 functional-939098 kubelet[4062]: E1121 14:15:33.133408    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:15:42 functional-939098 kubelet[4062]: E1121 14:15:42.134243    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:15:46 functional-939098 kubelet[4062]: E1121 14:15:46.134331    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:15:56 functional-939098 kubelet[4062]: E1121 14:15:56.134491    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:15:59 functional-939098 kubelet[4062]: E1121 14:15:59.133791    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:16:08 functional-939098 kubelet[4062]: E1121 14:16:08.133458    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:16:12 functional-939098 kubelet[4062]: E1121 14:16:12.133353    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	Nov 21 14:16:21 functional-939098 kubelet[4062]: E1121 14:16:21.133311    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6q5rv" podUID="53032433-3188-46f0-be28-a1186d880574"
	Nov 21 14:16:23 functional-939098 kubelet[4062]: E1121 14:16:23.133020    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-42nx5" podUID="ef8c30f5-8aba-4499-938e-629fd668e6cb"
	
	
	==> storage-provisioner [738d89637fea9da0640ff21b6e28c3f013c154280fe839874f9790936754b247] <==
	W1121 14:15:58.937885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:00.940859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:00.949180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:02.951689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:02.956192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:04.958812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:04.963622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:06.966668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:06.973327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:08.976562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:08.981307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:10.983999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:10.988786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:12.993507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:12.999490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:15.010972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:15.029641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:17.036691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:17.041532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:19.045180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:19.050714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:21.053266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:21.059666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:23.064548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:16:23.071498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [caabed561250488e166ecd9d43923e15db4c0da68ad344dd7f4e26c4c6fe8fde] <==
	I1121 14:05:42.370586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:05:42.372257       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-939098 -n functional-939098
helpers_test.go:269: (dbg) Run:  kubectl --context functional-939098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-42nx5 hello-node-connect-7d85dfc575-6q5rv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-939098 describe pod hello-node-75c85bcc94-42nx5 hello-node-connect-7d85dfc575-6q5rv
helpers_test.go:290: (dbg) kubectl --context functional-939098 describe pod hello-node-75c85bcc94-42nx5 hello-node-connect-7d85dfc575-6q5rv:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-42nx5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-939098/192.168.49.2
	Start Time:       Fri, 21 Nov 2025 14:06:38 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-htz98 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-htz98:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-42nx5 to functional-939098
	  Normal   Pulling    6m42s (x5 over 9m46s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m42s (x5 over 9m46s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m42s (x5 over 9m46s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m36s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m36s (x21 over 9m46s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-6q5rv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-939098/192.168.49.2
	Start Time:       Fri, 21 Nov 2025 14:06:22 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z74zh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z74zh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6q5rv to functional-939098
	  Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     5m1s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.53s)
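The describe output above points at CRI-O's short-name policy as the root cause: the deployment references the unqualified image "kicbase/echo-server", and with short-name-mode set to "enforcing" plus more than one unqualified-search registry configured, the containers/image resolver rejects the ambiguous lookup instead of defaulting to docker.io. Two standard remedies, sketched below under the assumption that the node uses the stock /etc/containers/registries.conf layout (the 1.0 tag and the drop-in filename are illustrative, not taken from this run):

    # Remedy A: fully qualify the image so no short-name resolution is needed
    kubectl --context functional-939098 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:1.0

    # Remedy B: pin an alias for the short name inside the minikube node
    out/minikube-linux-arm64 -p functional-939098 ssh
    sudo tee /etc/containers/registries.conf.d/90-echo-server.conf <<'EOF'
    [aliases]
    "kicbase/echo-server" = "docker.io/kicbase/echo-server"
    EOF
    sudo systemctl restart crio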

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-939098 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-939098 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-42nx5" [ef8c30f5-8aba-4499-938e-629fd668e6cb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1121 14:06:49.466054  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:09:05.599866  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:09:33.307865  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:14:05.599478  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-939098 -n functional-939098
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-21 14:16:39.406173297 +0000 UTC m=+1236.889153668
functional_test.go:1460: (dbg) Run:  kubectl --context functional-939098 describe po hello-node-75c85bcc94-42nx5 -n default
functional_test.go:1460: (dbg) kubectl --context functional-939098 describe po hello-node-75c85bcc94-42nx5 -n default:
Name:             hello-node-75c85bcc94-42nx5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-939098/192.168.49.2
Start Time:       Fri, 21 Nov 2025 14:06:38 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-htz98 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-htz98:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-42nx5 to functional-939098
Normal   Pulling    6m56s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m50s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m50s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-939098 logs hello-node-75c85bcc94-42nx5 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-939098 logs hello-node-75c85bcc94-42nx5 -n default: exit status 1 (105.77327ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-42nx5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-939098 logs hello-node-75c85bcc94-42nx5 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.91s)
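This timeout is the same failure as TestFunctional/parallel/ServiceCmdConnect above, observed from the deployment side: the pod stays in ImagePullBackOff for the full 10m0s because "kicbase/echo-server" is an ambiguous short name under the enforcing policy. A hedged variant of the step at functional_test.go:1451 that sidesteps short-name resolution entirely (the 1.0 tag is illustrative):

    kubectl --context functional-939098 create deployment hello-node \
      --image=docker.io/kicbase/echo-server:1.0
    kubectl --context functional-939098 expose deployment hello-node \
      --type=NodePort --port=8080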

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 service --namespace=default --https --url hello-node: exit status 115 (524.988948ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32373
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-939098 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 service hello-node --url --format={{.IP}}: exit status 115 (473.182054ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-939098 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 service hello-node --url: exit status 115 (514.115316ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32373
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-939098 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32373
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)
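The three ServiceCmd failures above (HTTPS, Format, URL) are downstream of the stuck hello-node deployment rather than independent bugs: each run resolves the NodePort correctly (32373 in every stdout) and then exits 115 with SVC_UNREACHABLE because the service has no ready endpoints. A quick way to confirm that the Service object is wired up and only the backing pod is missing, assuming the cluster is still up:

    kubectl --context functional-939098 get svc,endpoints hello-node -n default
    kubectl --context functional-939098 get pods -l app=hello-node -n default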

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image load --daemon kicbase/echo-server:functional-939098 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-939098 image load --daemon kicbase/echo-server:functional-939098 --alsologtostderr: (1.436001521s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-939098" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.70s)
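This failure, together with ImageReloadDaemon and ImageTagAndLoadDaemon below, follows one pattern: `image load --daemon` exits 0 but the tag never appears in `image ls`, i.e. the image is not landing in CRI-O's store. A minimal cross-check of what the runtime actually holds (the grep pattern is illustrative):

    out/minikube-linux-arm64 -p functional-939098 image ls
    out/minikube-linux-arm64 -p functional-939098 ssh -- sudo crictl images | grep echo-server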

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image load --daemon kicbase/echo-server:functional-939098 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-939098" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-939098
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image load --daemon kicbase/echo-server:functional-939098 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-939098" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image save kicbase/echo-server:functional-939098 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)
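Here `image save` likewise exits 0 while writing nothing to disk, which then cascades into the ImageLoadFromFile failure below (the stat error there is on the same tar path). A hedged reproduction outside the test harness, using a /tmp path instead of the Jenkins workspace:

    out/minikube-linux-arm64 -p functional-939098 image save \
      kicbase/echo-server:functional-939098 /tmp/echo-server-save.tar --alsologtostderr
    ls -l /tmp/echo-server-save.tar   # the tar should exist; in this run it was never created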

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1121 14:16:53.380748  318724 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:16:53.380939  318724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:16:53.380954  318724 out.go:374] Setting ErrFile to fd 2...
	I1121 14:16:53.380959  318724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:16:53.381241  318724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:16:53.381915  318724 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:16:53.382075  318724 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:16:53.382587  318724 cli_runner.go:164] Run: docker container inspect functional-939098 --format={{.State.Status}}
	I1121 14:16:53.402648  318724 ssh_runner.go:195] Run: systemctl --version
	I1121 14:16:53.402704  318724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
	I1121 14:16:53.419930  318724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
	I1121 14:16:53.519095  318724 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1121 14:16:53.519157  318724 cache_images.go:255] Failed to load cached images for "functional-939098": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1121 14:16:53.519182  318724 cache_images.go:267] failed pushing to: functional-939098

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-939098
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image save --daemon kicbase/echo-server:functional-939098 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-939098
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-939098: exit status 1 (18.437869ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-939098

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-939098

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
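`image save --daemon` is verified from the host side: the test asks the local Docker daemon whether `localhost/kicbase/echo-server:functional-939098` exists, and the non-zero exit from `docker image inspect` above means the save produced nothing. The same verification, reduced to its core (illustrative sketch):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// A non-nil err corresponds to the "No such image" exit status 1 above.
		err := exec.Command("docker", "image", "inspect",
			"localhost/kicbase/echo-server:functional-939098").Run()
		fmt.Println("present in host daemon:", err == nil)
	}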

TestJSONOutput/pause/Command (2.19s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-636314 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-636314 --output=json --user=testUser: exit status 80 (2.190338248s)

-- stdout --
	{"specversion":"1.0","id":"04f4b849-d1de-4c8f-9754-3dd88e96d8b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-636314 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"fe121986-1e6a-40d4-8af9-c22b80153f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-21T14:30:00Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"c4cc0653-c8be-4086-bf70-87b0456eab78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-636314 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.19s)
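The GUEST_PAUSE error here, the GUEST_UNPAUSE error in the next test, and the pause failures later in this report all reduce to the same node-side command: `sudo runc list -f json` exits 1 because /run/runc cannot be opened on the node. Outside the harness it can be checked directly over `minikube ssh` (hypothetical repro; assumes the json-output-636314 profile still exists):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// On the affected node this exits 1 with
		// `open /run/runc: no such file or directory`, matching the log above.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "json-output-636314",
			"ssh", "--", "sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("err=%v\noutput: %s", err, out)
	}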

TestJSONOutput/unpause/Command (2.11s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-636314 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-636314 --output=json --user=testUser: exit status 80 (2.112651624s)

-- stdout --
	{"specversion":"1.0","id":"01020f64-8d99-4de4-9bbb-acc9cfbf59a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-636314 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"07b46180-df5a-438d-bf62-d77c1d9eae0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-21T14:30:02Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"b5b855f1-8314-46c9-980b-fdd62ccc4d6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-636314 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.11s)

TestPause/serial/Pause (7.03s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-706190 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-706190 --alsologtostderr -v=5: exit status 80 (2.223491628s)

-- stdout --
	* Pausing node pause-706190 ... 
	
	

-- /stdout --
** stderr ** 
	I1121 14:53:28.006566  453215 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:53:28.007924  453215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:53:28.007991  453215 out.go:374] Setting ErrFile to fd 2...
	I1121 14:53:28.008019  453215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:53:28.008344  453215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:53:28.008742  453215 out.go:368] Setting JSON to false
	I1121 14:53:28.008811  453215 mustload.go:66] Loading cluster: pause-706190
	I1121 14:53:28.009329  453215 config.go:182] Loaded profile config "pause-706190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:53:28.009858  453215 cli_runner.go:164] Run: docker container inspect pause-706190 --format={{.State.Status}}
	I1121 14:53:28.031687  453215 host.go:66] Checking if "pause-706190" exists ...
	I1121 14:53:28.031995  453215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:53:28.141037  453215 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:53:28.130484699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:53:28.141725  453215 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-706190 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 14:53:28.146974  453215 out.go:179] * Pausing node pause-706190 ... 
	I1121 14:53:28.149933  453215 host.go:66] Checking if "pause-706190" exists ...
	I1121 14:53:28.150306  453215 ssh_runner.go:195] Run: systemctl --version
	I1121 14:53:28.150356  453215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:53:28.170801  453215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:53:28.291412  453215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:53:28.310589  453215 pause.go:52] kubelet running: true
	I1121 14:53:28.310687  453215 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:53:28.567940  453215 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:53:28.568038  453215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:53:28.657473  453215 cri.go:89] found id: "7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c"
	I1121 14:53:28.657495  453215 cri.go:89] found id: "8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab"
	I1121 14:53:28.657500  453215 cri.go:89] found id: "b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256"
	I1121 14:53:28.657503  453215 cri.go:89] found id: "bfb73c516ae0adef4c04d9b39aacb5990ae73b4cfa9b6fd7f5696465b6a4b222"
	I1121 14:53:28.657507  453215 cri.go:89] found id: "fbdd6d6086f23497b1695b35d4843d7323a4ff9df5621baeda893ea64d511a23"
	I1121 14:53:28.657511  453215 cri.go:89] found id: "e9789f6445316da4ede3f914ea50a7bb3356c7e4b2a8fc78aef346401a3881ae"
	I1121 14:53:28.657514  453215 cri.go:89] found id: "0271386e341fa99a2b463a6333f9aca47fc2de7c5cb39e67ebb2fccf8ffa1a5e"
	I1121 14:53:28.657517  453215 cri.go:89] found id: "040f84d32cdc7b868103d9e8b5e9e17971b6d790e17758c33a035160a39e7d02"
	I1121 14:53:28.657520  453215 cri.go:89] found id: "45d1eb07971cca34df63cc22e950e71d40b97c9098663f5e60130d5a971a5bdc"
	I1121 14:53:28.657526  453215 cri.go:89] found id: "8b6e9299e5d66efba1babf3908bb853e3ef2453315bc7a675c28ebfadd857a0b"
	I1121 14:53:28.657529  453215 cri.go:89] found id: "c933d49a5b407943549354e3a9e5fbb544091961370b604289633563c7439472"
	I1121 14:53:28.657537  453215 cri.go:89] found id: "6812ac6759a64275994e5d4179d3b1c59a354178a5c457f989b66dddfd9abce0"
	I1121 14:53:28.657541  453215 cri.go:89] found id: "197f8208783cc8b8d66bcaabe4dafe985f92bc2bb1c5c712bf1bd3332e0271f2"
	I1121 14:53:28.657544  453215 cri.go:89] found id: "850ff406c6df8f8893b6ab5c9796026832713346c5d66cfb49b7adfbe435e36e"
	I1121 14:53:28.657547  453215 cri.go:89] found id: ""
	I1121 14:53:28.657595  453215 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:53:28.669496  453215 retry.go:31] will retry after 342.515093ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:53:28Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:53:29.013144  453215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:53:29.026370  453215 pause.go:52] kubelet running: false
	I1121 14:53:29.026434  453215 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:53:29.164494  453215 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:53:29.164567  453215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:53:29.233458  453215 cri.go:89] found id: "7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c"
	I1121 14:53:29.233481  453215 cri.go:89] found id: "8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab"
	I1121 14:53:29.233487  453215 cri.go:89] found id: "b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256"
	I1121 14:53:29.233491  453215 cri.go:89] found id: "bfb73c516ae0adef4c04d9b39aacb5990ae73b4cfa9b6fd7f5696465b6a4b222"
	I1121 14:53:29.233495  453215 cri.go:89] found id: "fbdd6d6086f23497b1695b35d4843d7323a4ff9df5621baeda893ea64d511a23"
	I1121 14:53:29.233499  453215 cri.go:89] found id: "e9789f6445316da4ede3f914ea50a7bb3356c7e4b2a8fc78aef346401a3881ae"
	I1121 14:53:29.233502  453215 cri.go:89] found id: "0271386e341fa99a2b463a6333f9aca47fc2de7c5cb39e67ebb2fccf8ffa1a5e"
	I1121 14:53:29.233505  453215 cri.go:89] found id: "040f84d32cdc7b868103d9e8b5e9e17971b6d790e17758c33a035160a39e7d02"
	I1121 14:53:29.233509  453215 cri.go:89] found id: "45d1eb07971cca34df63cc22e950e71d40b97c9098663f5e60130d5a971a5bdc"
	I1121 14:53:29.233516  453215 cri.go:89] found id: "8b6e9299e5d66efba1babf3908bb853e3ef2453315bc7a675c28ebfadd857a0b"
	I1121 14:53:29.233519  453215 cri.go:89] found id: "c933d49a5b407943549354e3a9e5fbb544091961370b604289633563c7439472"
	I1121 14:53:29.233523  453215 cri.go:89] found id: "6812ac6759a64275994e5d4179d3b1c59a354178a5c457f989b66dddfd9abce0"
	I1121 14:53:29.233526  453215 cri.go:89] found id: "197f8208783cc8b8d66bcaabe4dafe985f92bc2bb1c5c712bf1bd3332e0271f2"
	I1121 14:53:29.233529  453215 cri.go:89] found id: "850ff406c6df8f8893b6ab5c9796026832713346c5d66cfb49b7adfbe435e36e"
	I1121 14:53:29.233532  453215 cri.go:89] found id: ""
	I1121 14:53:29.233585  453215 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:53:29.244241  453215 retry.go:31] will retry after 551.582314ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:53:29Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:53:29.795962  453215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:53:29.809124  453215 pause.go:52] kubelet running: false
	I1121 14:53:29.809208  453215 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:53:29.970772  453215 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:53:29.970862  453215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:53:30.115090  453215 cri.go:89] found id: "7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c"
	I1121 14:53:30.115142  453215 cri.go:89] found id: "8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab"
	I1121 14:53:30.115148  453215 cri.go:89] found id: "b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256"
	I1121 14:53:30.115152  453215 cri.go:89] found id: "bfb73c516ae0adef4c04d9b39aacb5990ae73b4cfa9b6fd7f5696465b6a4b222"
	I1121 14:53:30.115155  453215 cri.go:89] found id: "fbdd6d6086f23497b1695b35d4843d7323a4ff9df5621baeda893ea64d511a23"
	I1121 14:53:30.115159  453215 cri.go:89] found id: "e9789f6445316da4ede3f914ea50a7bb3356c7e4b2a8fc78aef346401a3881ae"
	I1121 14:53:30.115163  453215 cri.go:89] found id: "0271386e341fa99a2b463a6333f9aca47fc2de7c5cb39e67ebb2fccf8ffa1a5e"
	I1121 14:53:30.115166  453215 cri.go:89] found id: "040f84d32cdc7b868103d9e8b5e9e17971b6d790e17758c33a035160a39e7d02"
	I1121 14:53:30.115169  453215 cri.go:89] found id: "45d1eb07971cca34df63cc22e950e71d40b97c9098663f5e60130d5a971a5bdc"
	I1121 14:53:30.115182  453215 cri.go:89] found id: "8b6e9299e5d66efba1babf3908bb853e3ef2453315bc7a675c28ebfadd857a0b"
	I1121 14:53:30.115186  453215 cri.go:89] found id: "c933d49a5b407943549354e3a9e5fbb544091961370b604289633563c7439472"
	I1121 14:53:30.115190  453215 cri.go:89] found id: "6812ac6759a64275994e5d4179d3b1c59a354178a5c457f989b66dddfd9abce0"
	I1121 14:53:30.115192  453215 cri.go:89] found id: "197f8208783cc8b8d66bcaabe4dafe985f92bc2bb1c5c712bf1bd3332e0271f2"
	I1121 14:53:30.115198  453215 cri.go:89] found id: "850ff406c6df8f8893b6ab5c9796026832713346c5d66cfb49b7adfbe435e36e"
	I1121 14:53:30.115201  453215 cri.go:89] found id: ""
	I1121 14:53:30.115257  453215 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:53:30.131883  453215 out.go:203] 
	W1121 14:53:30.135074  453215 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:53:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:53:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:53:30.135103  453215 out.go:285] * 
	* 
	W1121 14:53:30.141258  453215 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:53:30.144217  453215 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-706190 --alsologtostderr -v=5" : exit status 80
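Note the contrast in the log above: crictl enumerates fourteen running containers across the target namespaces, yet `sudo runc list -f json` cannot open its state directory, and minikube exits with GUEST_PAUSE after the backoff retries logged by retry.go (342ms, then 551ms). The retry shape is roughly the following (illustrative sketch under that reading of the log, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry re-runs op with a jittered, growing delay, mirroring the
	// "will retry after ..." lines in the log above.
	func retry(op func() error, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			wait := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			base *= 2
		}
		return err
	}

	func main() {
		// Stand-in for the runc listing, which never succeeds in this run.
		err := retry(func() error {
			return errors.New("open /run/runc: no such file or directory")
		}, 3, 300*time.Millisecond)
		fmt.Println("giving up:", err) // corresponds to the GUEST_PAUSE exit above
	}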
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-706190
helpers_test.go:243: (dbg) docker inspect pause-706190:

-- stdout --
	[
	    {
	        "Id": "825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5",
	        "Created": "2025-11-21T14:51:39.42172846Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 446929,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:51:39.497727295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5/hostname",
	        "HostsPath": "/var/lib/docker/containers/825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5/hosts",
	        "LogPath": "/var/lib/docker/containers/825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5/825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5-json.log",
	        "Name": "/pause-706190",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-706190:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-706190",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5",
	                "LowerDir": "/var/lib/docker/overlay2/ae5cba5c9d043c50cfa2963d11dc3a54d992a67e25d2d84684be2f6df851234c-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae5cba5c9d043c50cfa2963d11dc3a54d992a67e25d2d84684be2f6df851234c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae5cba5c9d043c50cfa2963d11dc3a54d992a67e25d2d84684be2f6df851234c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae5cba5c9d043c50cfa2963d11dc3a54d992a67e25d2d84684be2f6df851234c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-706190",
	                "Source": "/var/lib/docker/volumes/pause-706190/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-706190",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-706190",
	                "name.minikube.sigs.k8s.io": "pause-706190",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08f07c3af72b4d6ddd9525e136d39244cda3f6e5ed0f6b22caa8ccfb53a44442",
	            "SandboxKey": "/var/run/docker/netns/08f07c3af72b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-706190": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:88:7b:03:30:f1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26798d30f93b1e0bf415d72857a44bd1fc90420e58b51ac06cac18b61d8f7e46",
	                    "EndpointID": "f9d6051517240099d960b5c391f09cb764f9699dd391246859d6556f80b1ef87",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-706190",
	                        "825f44f6e1cd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-706190 -n pause-706190
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-706190 -n pause-706190: exit status 2 (336.941062ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-706190 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-706190 logs -n 25: (1.596251571s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-140266 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:47 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p missing-upgrade-036945 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-036945    │ jenkins │ v1.32.0 │ 21 Nov 25 14:47 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p NoKubernetes-140266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ delete  │ -p NoKubernetes-140266                                                                                                                   │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p NoKubernetes-140266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ ssh     │ -p NoKubernetes-140266 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │                     │
	│ stop    │ -p NoKubernetes-140266                                                                                                                   │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p NoKubernetes-140266 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p missing-upgrade-036945 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-036945    │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:49 UTC │
	│ ssh     │ -p NoKubernetes-140266 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │                     │
	│ delete  │ -p NoKubernetes-140266                                                                                                                   │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-886613 │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:49 UTC │
	│ stop    │ -p kubernetes-upgrade-886613                                                                                                             │ kubernetes-upgrade-886613 │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ start   │ -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-886613 │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │                     │
	│ delete  │ -p missing-upgrade-036945                                                                                                                │ missing-upgrade-036945    │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ start   │ -p stopped-upgrade-489557 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-489557    │ jenkins │ v1.32.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ stop    │ stopped-upgrade-489557 stop                                                                                                              │ stopped-upgrade-489557    │ jenkins │ v1.32.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:50 UTC │
	│ start   │ -p stopped-upgrade-489557 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-489557    │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ delete  │ -p stopped-upgrade-489557                                                                                                                │ stopped-upgrade-489557    │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ start   │ -p running-upgrade-913045 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-913045    │ jenkins │ v1.32.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:51 UTC │
	│ start   │ -p running-upgrade-913045 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-913045    │ jenkins │ v1.37.0 │ 21 Nov 25 14:51 UTC │ 21 Nov 25 14:51 UTC │
	│ delete  │ -p running-upgrade-913045                                                                                                                │ running-upgrade-913045    │ jenkins │ v1.37.0 │ 21 Nov 25 14:51 UTC │ 21 Nov 25 14:51 UTC │
	│ start   │ -p pause-706190 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-706190              │ jenkins │ v1.37.0 │ 21 Nov 25 14:51 UTC │ 21 Nov 25 14:52 UTC │
	│ start   │ -p pause-706190 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-706190              │ jenkins │ v1.37.0 │ 21 Nov 25 14:52 UTC │ 21 Nov 25 14:53 UTC │
	│ pause   │ -p pause-706190 --alsologtostderr -v=5                                                                                                   │ pause-706190              │ jenkins │ v1.37.0 │ 21 Nov 25 14:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:52:56
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:52:56.065107  451036 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:52:56.065317  451036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:52:56.065350  451036 out.go:374] Setting ErrFile to fd 2...
	I1121 14:52:56.065371  451036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:52:56.065618  451036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:52:56.065998  451036 out.go:368] Setting JSON to false
	I1121 14:52:56.067158  451036 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9328,"bootTime":1763727448,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:52:56.067262  451036 start.go:143] virtualization:  
	I1121 14:52:56.070582  451036 out.go:179] * [pause-706190] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:52:56.074554  451036 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:52:56.074640  451036 notify.go:221] Checking for updates...
	I1121 14:52:56.081382  451036 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:52:56.084437  451036 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:52:56.087421  451036 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:52:56.090449  451036 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:52:56.093575  451036 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:52:56.097186  451036 config.go:182] Loaded profile config "pause-706190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:52:56.097905  451036 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:52:56.136595  451036 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:52:56.136781  451036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:52:56.210991  451036 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:52:56.201824302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:52:56.211107  451036 docker.go:319] overlay module found
	I1121 14:52:56.214226  451036 out.go:179] * Using the docker driver based on existing profile
	I1121 14:52:56.217086  451036 start.go:309] selected driver: docker
	I1121 14:52:56.217111  451036 start.go:930] validating driver "docker" against &{Name:pause-706190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:52:56.217250  451036 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:52:56.217358  451036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:52:56.273999  451036 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:52:56.263527792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:52:56.274405  451036 cni.go:84] Creating CNI manager for ""
	I1121 14:52:56.274476  451036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:52:56.274529  451036 start.go:353] cluster config:
	{Name:pause-706190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:52:56.277730  451036 out.go:179] * Starting "pause-706190" primary control-plane node in "pause-706190" cluster
	I1121 14:52:56.280580  451036 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:52:56.283509  451036 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:52:56.286363  451036 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:52:56.286414  451036 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 14:52:56.286428  451036 cache.go:65] Caching tarball of preloaded images
	I1121 14:52:56.286440  451036 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:52:56.286512  451036 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 14:52:56.286521  451036 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:52:56.286657  451036 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/config.json ...
	I1121 14:52:56.305912  451036 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:52:56.305935  451036 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:52:56.305952  451036 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:52:56.305976  451036 start.go:360] acquireMachinesLock for pause-706190: {Name:mk7d0e7547f55706e743e0e645a87e32329e26b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:52:56.306044  451036 start.go:364] duration metric: took 40.337µs to acquireMachinesLock for "pause-706190"
	I1121 14:52:56.306067  451036 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:52:56.306077  451036 fix.go:54] fixHost starting: 
	I1121 14:52:56.306330  451036 cli_runner.go:164] Run: docker container inspect pause-706190 --format={{.State.Status}}
	I1121 14:52:56.326051  451036 fix.go:112] recreateIfNeeded on pause-706190: state=Running err=<nil>
	W1121 14:52:56.326097  451036 fix.go:138] unexpected machine state, will restart: <nil>
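The fixHost sequence above decides from the container's reported state whether the existing machine can be reused: a Running container is re-provisioned in place ("will restart") rather than recreated. A minimal stand-alone sketch of that check, assuming docker is on PATH and reusing the profile name from the log (an illustration, not minikube's actual fix.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState returns the docker container's state string,
    // e.g. "running" or "exited", mirroring the inspect call in the log.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("pause-706190")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        // A running machine is re-provisioned in place; any other
        // state would trigger recreation of the container.
        if state == "running" {
            fmt.Println("reusing machine, re-provisioning in place")
        } else {
            fmt.Printf("state=%s: machine would be recreated\n", state)
        }
    }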
	I1121 14:52:54.542810  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:52:54.905519  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:56582->192.168.76.2:8443: read: connection reset by peer
	I1121 14:52:54.905577  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:52:54.905640  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:52:54.938180  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:52:54.938201  435193 cri.go:89] found id: "14ea6f8fd4272c68cf4137c6d5ef40c99cf24d7a773c2dbf35175ede2a6ad591"
	I1121 14:52:54.938206  435193 cri.go:89] found id: ""
	I1121 14:52:54.938214  435193 logs.go:282] 2 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e 14ea6f8fd4272c68cf4137c6d5ef40c99cf24d7a773c2dbf35175ede2a6ad591]
	I1121 14:52:54.938271  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:54.942258  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:54.945992  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:52:54.946061  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:52:54.976641  435193 cri.go:89] found id: ""
	I1121 14:52:54.976664  435193 logs.go:282] 0 containers: []
	W1121 14:52:54.976673  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:52:54.976679  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:52:54.976739  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:52:55.021705  435193 cri.go:89] found id: ""
	I1121 14:52:55.021731  435193 logs.go:282] 0 containers: []
	W1121 14:52:55.021741  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:52:55.021748  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:52:55.021814  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:52:55.052878  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:52:55.052900  435193 cri.go:89] found id: ""
	I1121 14:52:55.052908  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:52:55.052970  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:55.057203  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:52:55.057278  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:52:55.086002  435193 cri.go:89] found id: ""
	I1121 14:52:55.086025  435193 logs.go:282] 0 containers: []
	W1121 14:52:55.086033  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:52:55.086039  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:52:55.086100  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:52:55.113706  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:52:55.113735  435193 cri.go:89] found id: "a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:52:55.113741  435193 cri.go:89] found id: ""
	I1121 14:52:55.113748  435193 logs.go:282] 2 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384]
	I1121 14:52:55.113806  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:55.117617  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:55.121179  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:52:55.121247  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:52:55.150078  435193 cri.go:89] found id: ""
	I1121 14:52:55.150145  435193 logs.go:282] 0 containers: []
	W1121 14:52:55.150169  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:52:55.150190  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:52:55.150270  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:52:55.178066  435193 cri.go:89] found id: ""
	I1121 14:52:55.178097  435193 logs.go:282] 0 containers: []
	W1121 14:52:55.178106  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:52:55.178121  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:52:55.178136  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:52:55.211356  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:52:55.211386  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:52:55.242691  435193 logs.go:123] Gathering logs for kube-controller-manager [a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384] ...
	I1121 14:52:55.242727  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:52:55.278465  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:52:55.278491  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:52:55.315131  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:52:55.315161  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:52:55.433791  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:52:55.433828  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:52:55.451881  435193 logs.go:123] Gathering logs for kube-apiserver [14ea6f8fd4272c68cf4137c6d5ef40c99cf24d7a773c2dbf35175ede2a6ad591] ...
	I1121 14:52:55.451909  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 14ea6f8fd4272c68cf4137c6d5ef40c99cf24d7a773c2dbf35175ede2a6ad591"
	I1121 14:52:55.490588  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:52:55.490622  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:52:55.551541  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:52:55.551576  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:52:55.623350  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:52:55.623387  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:52:55.707579  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
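The interleaved 435193 run keeps probing the apiserver's /healthz endpoint and falls back to gathering component logs after each failure. A rough sketch of such a probe, assuming a self-signed apiserver certificate (hence the skipped TLS verification) and the endpoint from the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver cert is signed by minikubeCA, not a system CA,
            // so a bare probe like this has to skip verification.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 5; i++ {
            resp, err := client.Get("https://192.168.76.2:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // matches the log's failure lines
                time.Sleep(3 * time.Second)  // back off before retrying
                continue
            }
            resp.Body.Close()
            fmt.Println("healthz:", resp.Status)
            return
        }
    }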
	I1121 14:52:56.329293  451036 out.go:252] * Updating the running docker "pause-706190" container ...
	I1121 14:52:56.329328  451036 machine.go:94] provisionDockerMachine start ...
	I1121 14:52:56.329410  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:52:56.347461  451036 main.go:143] libmachine: Using SSH client type: native
	I1121 14:52:56.347790  451036 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1121 14:52:56.347805  451036 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:52:56.491981  451036 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-706190
	
	I1121 14:52:56.492006  451036 ubuntu.go:182] provisioning hostname "pause-706190"
	I1121 14:52:56.492076  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:52:56.510696  451036 main.go:143] libmachine: Using SSH client type: native
	I1121 14:52:56.511059  451036 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1121 14:52:56.511087  451036 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-706190 && echo "pause-706190" | sudo tee /etc/hostname
	I1121 14:52:56.665950  451036 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-706190
	
	I1121 14:52:56.666094  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:52:56.685575  451036 main.go:143] libmachine: Using SSH client type: native
	I1121 14:52:56.685903  451036 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1121 14:52:56.685925  451036 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-706190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-706190/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-706190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:52:56.832865  451036 main.go:143] libmachine: SSH cmd err, output: <nil>: 
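The /etc/hosts script above is idempotent: it rewrites the 127.0.1.1 entry only when the hostname is missing, and appends only when no 127.0.1.1 line exists at all. A sketch of rendering that script from a profile name with text/template (the exact mechanism inside ubuntu.go may differ):

    package main

    import (
        "os"
        "text/template"
    )

    // hostsScript mirrors the shell run over SSH in the log: fix the
    // 127.0.1.1 entry in /etc/hosts only when the hostname is absent.
    const hostsScript = `
    if ! grep -xq '.*\s{{.Name}}' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 {{.Name}}/g' /etc/hosts;
      else
        echo '127.0.1.1 {{.Name}}' | sudo tee -a /etc/hosts;
      fi
    fi
    `

    func main() {
        t := template.Must(template.New("hosts").Parse(hostsScript))
        // Hypothetical profile name, taken from the log above.
        t.Execute(os.Stdout, struct{ Name string }{"pause-706190"})
    }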
	I1121 14:52:56.832955  451036 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 14:52:56.832997  451036 ubuntu.go:190] setting up certificates
	I1121 14:52:56.833023  451036 provision.go:84] configureAuth start
	I1121 14:52:56.833101  451036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-706190
	I1121 14:52:56.851761  451036 provision.go:143] copyHostCerts
	I1121 14:52:56.851832  451036 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 14:52:56.851847  451036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 14:52:56.851926  451036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 14:52:56.852022  451036 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 14:52:56.852027  451036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 14:52:56.852052  451036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 14:52:56.852106  451036 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 14:52:56.852110  451036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 14:52:56.852132  451036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 14:52:56.852177  451036 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.pause-706190 san=[127.0.0.1 192.168.85.2 localhost minikube pause-706190]
	I1121 14:52:56.988146  451036 provision.go:177] copyRemoteCerts
	I1121 14:52:56.988228  451036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:52:56.988273  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:52:57.011415  451036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:52:57.112338  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:52:57.130964  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1121 14:52:57.149380  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:52:57.168555  451036 provision.go:87] duration metric: took 335.496183ms to configureAuth
	I1121 14:52:57.168642  451036 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:52:57.168877  451036 config.go:182] Loaded profile config "pause-706190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:52:57.168990  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:52:57.186886  451036 main.go:143] libmachine: Using SSH client type: native
	I1121 14:52:57.187196  451036 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1121 14:52:57.187216  451036 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:52:58.207904  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:52:58.208372  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:52:58.208439  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:52:58.208492  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:52:58.246593  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:52:58.246616  435193 cri.go:89] found id: ""
	I1121 14:52:58.246624  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:52:58.246678  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:58.250153  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:52:58.250238  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:52:58.276697  435193 cri.go:89] found id: ""
	I1121 14:52:58.276720  435193 logs.go:282] 0 containers: []
	W1121 14:52:58.276728  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:52:58.276735  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:52:58.276791  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:52:58.304879  435193 cri.go:89] found id: ""
	I1121 14:52:58.304901  435193 logs.go:282] 0 containers: []
	W1121 14:52:58.304911  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:52:58.304917  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:52:58.304974  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:52:58.329819  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:52:58.329841  435193 cri.go:89] found id: ""
	I1121 14:52:58.329850  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:52:58.329930  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:58.333591  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:52:58.333662  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:52:58.359694  435193 cri.go:89] found id: ""
	I1121 14:52:58.359718  435193 logs.go:282] 0 containers: []
	W1121 14:52:58.359727  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:52:58.359733  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:52:58.359796  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:52:58.386232  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:52:58.386253  435193 cri.go:89] found id: "a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:52:58.386258  435193 cri.go:89] found id: ""
	I1121 14:52:58.386265  435193 logs.go:282] 2 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384]
	I1121 14:52:58.386321  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:58.390076  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:58.393565  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:52:58.393640  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:52:58.420067  435193 cri.go:89] found id: ""
	I1121 14:52:58.420092  435193 logs.go:282] 0 containers: []
	W1121 14:52:58.420101  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:52:58.420107  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:52:58.420166  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:52:58.447563  435193 cri.go:89] found id: ""
	I1121 14:52:58.447590  435193 logs.go:282] 0 containers: []
	W1121 14:52:58.447599  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:52:58.447615  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:52:58.447630  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:52:58.565131  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:52:58.565170  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:52:58.629023  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:52:58.629058  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:52:58.694716  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:52:58.694759  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:52:58.726152  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:52:58.726182  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:52:58.742023  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:52:58.742053  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:52:58.811745  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:52:58.811776  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:52:58.811791  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:52:58.845213  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:52:58.845247  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:52:58.871072  435193 logs.go:123] Gathering logs for kube-controller-manager [a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384] ...
	I1121 14:52:58.871100  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:53:01.398720  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:01.399403  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:01.399455  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:01.399515  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:01.428361  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:01.428406  435193 cri.go:89] found id: ""
	I1121 14:53:01.428415  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:01.428478  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:01.432298  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:01.432448  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:01.460955  435193 cri.go:89] found id: ""
	I1121 14:53:01.460982  435193 logs.go:282] 0 containers: []
	W1121 14:53:01.460993  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:01.461000  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:01.461068  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:01.487688  435193 cri.go:89] found id: ""
	I1121 14:53:01.487757  435193 logs.go:282] 0 containers: []
	W1121 14:53:01.487780  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:01.487799  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:01.487887  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:01.515382  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:01.515449  435193 cri.go:89] found id: ""
	I1121 14:53:01.515471  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:01.515558  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:01.519532  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:01.519647  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:01.551715  435193 cri.go:89] found id: ""
	I1121 14:53:01.551783  435193 logs.go:282] 0 containers: []
	W1121 14:53:01.551805  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:01.551826  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:01.551918  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:01.579905  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:01.579972  435193 cri.go:89] found id: "a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:53:01.579990  435193 cri.go:89] found id: ""
	I1121 14:53:01.580012  435193 logs.go:282] 2 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384]
	I1121 14:53:01.580097  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:01.583973  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:01.587512  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:01.587592  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:01.618168  435193 cri.go:89] found id: ""
	I1121 14:53:01.618245  435193 logs.go:282] 0 containers: []
	W1121 14:53:01.618261  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:01.618268  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:01.618328  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:01.644068  435193 cri.go:89] found id: ""
	I1121 14:53:01.644092  435193 logs.go:282] 0 containers: []
	W1121 14:53:01.644101  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:01.644114  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:01.644125  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:01.703328  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:01.703369  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:01.733878  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:01.733967  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:01.759140  435193 logs.go:123] Gathering logs for kube-controller-manager [a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384] ...
	I1121 14:53:01.759214  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:53:01.785218  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:01.785304  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:01.907393  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:01.907429  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:01.924839  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:01.924877  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:01.996481  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:01.996502  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:01.996516  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:02.036943  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:02.036978  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
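Every log-gathering pass above follows the same shape: crictl ps -a --quiet --name=<component> to collect container IDs, then crictl logs --tail 400 <id> for each match. A compact local sketch of that loop, assuming crictl and sudo are available (minikube wraps the same commands in an ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all (running or exited) CRI containers whose
    // name matches the given component, one ID per line.
    func containerIDs(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
            ids := containerIDs(component)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", component)
                continue
            }
            for _, id := range ids {
                // Tail the last 400 lines, as in the log above.
                out, _ := exec.Command("sudo", "crictl", "logs",
                    "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", component, id, out)
            }
        }
    }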
	I1121 14:53:02.569573  451036 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:53:02.569595  451036 machine.go:97] duration metric: took 6.240258187s to provisionDockerMachine
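The SSH output above is tee echoing the CRIO_MINIKUBE_OPTIONS fragment it wrote to /etc/sysconfig/crio.minikube before CRI-O was restarted; presumably the crio unit in the base image sources that file as an EnvironmentFile, so the --insecure-registry flag takes effect on restart. A sketch of composing that one-liner with the registry CIDR as a parameter (illustrative only):

    package main

    import "fmt"

    // crioOptsCmd builds the one-liner seen in the log: write the
    // sysconfig fragment, then restart CRI-O to pick it up.
    func crioOptsCmd(insecureCIDR string) string {
        content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureCIDR)
        return fmt.Sprintf(
            `sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`,
            content)
    }

    func main() {
        fmt.Println(crioOptsCmd("10.96.0.0/12")) // service CIDR from the cluster config
    }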
	I1121 14:53:02.569606  451036 start.go:293] postStartSetup for "pause-706190" (driver="docker")
	I1121 14:53:02.569617  451036 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:53:02.569687  451036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:53:02.569735  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:53:02.588916  451036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:53:02.696467  451036 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:53:02.699771  451036 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:53:02.699798  451036 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:53:02.699813  451036 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 14:53:02.699870  451036 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 14:53:02.699951  451036 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 14:53:02.700063  451036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:53:02.707412  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:53:02.725035  451036 start.go:296] duration metric: took 155.41291ms for postStartSetup
	I1121 14:53:02.725115  451036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:53:02.725156  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:53:02.741748  451036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:53:02.838133  451036 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:53:02.843244  451036 fix.go:56] duration metric: took 6.537158791s for fixHost
	I1121 14:53:02.843273  451036 start.go:83] releasing machines lock for "pause-706190", held for 6.537217475s
	I1121 14:53:02.843351  451036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-706190
	I1121 14:53:02.860298  451036 ssh_runner.go:195] Run: cat /version.json
	I1121 14:53:02.860355  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:53:02.860672  451036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:53:02.860740  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:53:02.886984  451036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:53:02.889216  451036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:53:02.992272  451036 ssh_runner.go:195] Run: systemctl --version
	I1121 14:53:03.086781  451036 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:53:03.131808  451036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:53:03.136338  451036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:53:03.136445  451036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:53:03.144768  451036 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:53:03.144801  451036 start.go:496] detecting cgroup driver to use...
	I1121 14:53:03.144835  451036 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:53:03.144886  451036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:53:03.161349  451036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:53:03.175044  451036 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:53:03.175134  451036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:53:03.191627  451036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:53:03.205556  451036 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:53:03.353521  451036 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:53:03.493693  451036 docker.go:234] disabling docker service ...
	I1121 14:53:03.493855  451036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:53:03.509649  451036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:53:03.524224  451036 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:53:03.662486  451036 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:53:03.802937  451036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:53:03.816342  451036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:53:03.832467  451036 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:53:03.832579  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.842840  451036 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 14:53:03.842911  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.852897  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.862619  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.872550  451036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:53:03.886511  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.896662  451036 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.906291  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.916629  451036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:53:03.925730  451036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:53:03.933711  451036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:53:04.069410  451036 ssh_runner.go:195] Run: sudo systemctl restart crio
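The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to cgroupfs, force conmon_cgroup to "pod", and open unprivileged ports via default_sysctls, then daemon-reload and restart CRI-O. A condensed replay of the core edits as an ordered command list (paths and sed expressions are taken from the log; running this is only meaningful inside a minikube node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // crioEdits replays the in-place config rewrites from the log,
    // in order, against the CRI-O drop-in config.
    var crioEdits = []string{
        `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
        `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
        `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
        `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
        `sudo systemctl daemon-reload`,
        `sudo systemctl restart crio`,
    }

    func main() {
        for _, cmd := range crioEdits {
            if err := exec.Command("/bin/sh", "-c", cmd).Run(); err != nil {
                fmt.Println(cmd, "->", err)
                return
            }
        }
    }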
	I1121 14:53:04.303469  451036 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:53:04.303552  451036 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:53:04.307510  451036 start.go:564] Will wait 60s for crictl version
	I1121 14:53:04.307573  451036 ssh_runner.go:195] Run: which crictl
	I1121 14:53:04.311217  451036 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:53:04.334888  451036 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:53:04.334983  451036 ssh_runner.go:195] Run: crio --version
	I1121 14:53:04.366743  451036 ssh_runner.go:195] Run: crio --version
	I1121 14:53:04.399084  451036 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:53:04.402039  451036 cli_runner.go:164] Run: docker network inspect pause-706190 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:53:04.418007  451036 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:53:04.421985  451036 kubeadm.go:884] updating cluster {Name:pause-706190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:53:04.422144  451036 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:53:04.422202  451036 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:53:04.454058  451036 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:53:04.454080  451036 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:53:04.454146  451036 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:53:04.479696  451036 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:53:04.479772  451036 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:53:04.479802  451036 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:53:04.479933  451036 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-706190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-706190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:53:04.480034  451036 ssh_runner.go:195] Run: crio config
	I1121 14:53:04.547478  451036 cni.go:84] Creating CNI manager for ""
	I1121 14:53:04.547547  451036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:53:04.547584  451036 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:53:04.547634  451036 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-706190 NodeName:pause-706190 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:53:04.547799  451036 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-706190"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:53:04.548108  451036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:53:04.557562  451036 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:53:04.557715  451036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:53:04.565568  451036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1121 14:53:04.578730  451036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:53:04.591810  451036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
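The three scp calls above materialize the kubelet drop-in, the kubelet unit, and the rendered kubeadm config (kubeadm.yaml.new, 2209 bytes, four YAML documents separated by ---). A trivial stdlib-only sanity check one might run against such a file, assuming a local copy named kubeadm.yaml (hypothetical; minikube itself does not perform this step):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy
        if err != nil {
            fmt.Println(err)
            return
        }
        // Split on document separators; the config in the log has four parts:
        // InitConfiguration, ClusterConfiguration, KubeletConfiguration,
        // and KubeProxyConfiguration.
        docs := strings.Split(string(data), "\n---\n")
        fmt.Printf("%d documents, %d bytes\n", len(docs), len(data))
        for _, d := range docs {
            for _, line := range strings.Split(d, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Println(" -", strings.TrimPrefix(line, "kind: "))
                }
            }
        }
    }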
	I1121 14:53:04.606286  451036 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:53:04.611154  451036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:53:04.792016  451036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:53:04.808585  451036 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190 for IP: 192.168.85.2
	I1121 14:53:04.808624  451036 certs.go:195] generating shared ca certs ...
	I1121 14:53:04.808642  451036 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:53:04.808821  451036 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 14:53:04.808889  451036 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 14:53:04.808912  451036 certs.go:257] generating profile certs ...
	I1121 14:53:04.809033  451036 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.key
	I1121 14:53:04.809124  451036 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/apiserver.key.65068416
	I1121 14:53:04.809192  451036 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/proxy-client.key
	I1121 14:53:04.809323  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 14:53:04.809380  451036 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 14:53:04.809398  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:53:04.809448  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:53:04.809494  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:53:04.809529  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 14:53:04.809593  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:53:04.810246  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:53:04.838311  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:53:04.861904  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:53:04.884071  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:53:04.905039  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 14:53:04.929395  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:53:04.950168  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:53:04.970400  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:53:04.991020  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:53:05.014739  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 14:53:05.038793  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 14:53:05.069504  451036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:53:05.085711  451036 ssh_runner.go:195] Run: openssl version
	I1121 14:53:05.093659  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:53:05.106241  451036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:53:05.112009  451036 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:53:05.112106  451036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:53:05.159990  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:53:05.172168  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 14:53:05.185530  451036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 14:53:05.190147  451036 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 14:53:05.190243  451036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 14:53:05.236823  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 14:53:05.246176  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 14:53:05.255628  451036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 14:53:05.259834  451036 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 14:53:05.259927  451036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 14:53:05.306640  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
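
The sequence above installs each CA into the node's system trust store: openssl x509 -hash -noout prints the certificate's subject hash (b5213941 for minikubeCA in this run), which becomes the symlink name under /etc/ssl/certs. A rough Go equivalent of the hash-and-link step; installCA is a hypothetical helper shelling out to the same openssl invocation the log shows:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash for a PEM cert and links
// it into /etc/ssl/certs under the <hash>.0 name OpenSSL looks up.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any existing link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
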
	I1121 14:53:05.320555  451036 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:53:05.325523  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:53:05.372212  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:53:05.414483  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:53:05.458511  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:53:05.500637  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:53:05.542403  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
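
Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how a restart decides whether control-plane certs need regenerating. The same check in pure Go with crypto/x509, as a sketch; the cert path is just one example from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert at path expires within d,
// the same question `openssl x509 -checkend` answers via exit code.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		os.Exit(1) // mirrors openssl's non-zero exit for -checkend
	}
}
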
	I1121 14:53:05.594088  451036 kubeadm.go:401] StartCluster: {Name:pause-706190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:53:05.594205  451036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:53:05.594270  451036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:53:05.624133  451036 cri.go:89] found id: "040f84d32cdc7b868103d9e8b5e9e17971b6d790e17758c33a035160a39e7d02"
	I1121 14:53:05.624153  451036 cri.go:89] found id: "45d1eb07971cca34df63cc22e950e71d40b97c9098663f5e60130d5a971a5bdc"
	I1121 14:53:05.624158  451036 cri.go:89] found id: "8b6e9299e5d66efba1babf3908bb853e3ef2453315bc7a675c28ebfadd857a0b"
	I1121 14:53:05.624161  451036 cri.go:89] found id: "c933d49a5b407943549354e3a9e5fbb544091961370b604289633563c7439472"
	I1121 14:53:05.624164  451036 cri.go:89] found id: "6812ac6759a64275994e5d4179d3b1c59a354178a5c457f989b66dddfd9abce0"
	I1121 14:53:05.624167  451036 cri.go:89] found id: "197f8208783cc8b8d66bcaabe4dafe985f92bc2bb1c5c712bf1bd3332e0271f2"
	I1121 14:53:05.624171  451036 cri.go:89] found id: "850ff406c6df8f8893b6ab5c9796026832713346c5d66cfb49b7adfbe435e36e"
	I1121 14:53:05.624174  451036 cri.go:89] found id: ""
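
The container IDs above come from filtering CRI containers by the io.kubernetes.pod.namespace=kube-system label; crictl ps --quiet prints one ID per line, and the trailing blank entry explains the final found id: "". A simplified sketch of the same query; the log runs it through sudo -s eval over ssh, this just uses os/exec locally:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers lists all kube-system container IDs via crictl,
// splitting the newline-separated output and dropping blank entries.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, id := range strings.Split(string(out), "\n") {
		if id = strings.TrimSpace(id); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
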
	I1121 14:53:05.624227  451036 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:53:05.635247  451036 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:53:05Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:53:05.635329  451036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:53:05.643802  451036 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:53:05.643822  451036 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:53:05.643874  451036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:53:05.651557  451036 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:53:05.652185  451036 kubeconfig.go:125] found "pause-706190" server: "https://192.168.85.2:8443"
	I1121 14:53:05.653094  451036 kapi.go:59] client config for pause-706190: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.crt", KeyFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.key", CAFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21278a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1121 14:53:05.653588  451036 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1121 14:53:05.653607  451036 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1121 14:53:05.653613  451036 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1121 14:53:05.653618  451036 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1121 14:53:05.653630  451036 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1121 14:53:05.653924  451036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:53:05.661764  451036 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:53:05.661836  451036 kubeadm.go:602] duration metric: took 18.00741ms to restartPrimaryControlPlane
	I1121 14:53:05.661853  451036 kubeadm.go:403] duration metric: took 67.776017ms to StartCluster
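
The restart path decides whether the control plane needs reconfiguring purely from diff's exit code: diff -u exits 0 when the rendered kubeadm.yaml is unchanged and 1 when it differs, so "does not require reconfiguration" above means the diff ran clean. A sketch of that decision; needsReconfig is a hypothetical helper, not minikube's function:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfig compares the deployed and freshly rendered kubeadm.yaml
// and maps diff's exit code: 0 = identical, 1 = differs, else = error.
func needsReconfig(current, proposed string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil // identical, no reconfiguration needed
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // files differ
	}
	return false, err // diff itself failed (missing file, etc.)
}

func main() {
	diff, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("needs reconfig:", diff, "err:", err)
}
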
	I1121 14:53:05.661870  451036 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:53:05.661935  451036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:53:05.662806  451036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:53:05.663031  451036 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:53:05.663384  451036 config.go:182] Loaded profile config "pause-706190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:53:05.663447  451036 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:53:05.669390  451036 out.go:179] * Verifying Kubernetes components...
	I1121 14:53:05.669398  451036 out.go:179] * Enabled addons: 
	I1121 14:53:05.672142  451036 addons.go:530] duration metric: took 8.691492ms for enable addons: enabled=[]
	I1121 14:53:05.672175  451036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:53:05.798866  451036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:53:05.812140  451036 node_ready.go:35] waiting up to 6m0s for node "pause-706190" to be "Ready" ...
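
The node wait that starts here polls the API server for the node object and checks its NodeReady condition; the "connection refused" warnings further down are just retries while the apiserver container comes back. A client-go sketch of the loop, assuming the default ~/.kube/config; the node name is taken from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its NodeReady
// condition is True.
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ok, err := nodeReady(cs, "pause-706190")
		if ok {
			fmt.Println("node pause-706190 is Ready")
			return
		}
		// "connection refused" lands here while the apiserver restarts.
		fmt.Println("not ready yet:", err)
		time.Sleep(2 * time.Second)
	}
}
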
	I1121 14:53:04.600492  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:04.600923  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:04.600966  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:04.601024  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:04.632630  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:04.632651  435193 cri.go:89] found id: ""
	I1121 14:53:04.632659  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:04.632718  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:04.636298  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:04.636369  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:04.695503  435193 cri.go:89] found id: ""
	I1121 14:53:04.695524  435193 logs.go:282] 0 containers: []
	W1121 14:53:04.695532  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:04.695538  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:04.695598  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:04.732507  435193 cri.go:89] found id: ""
	I1121 14:53:04.732527  435193 logs.go:282] 0 containers: []
	W1121 14:53:04.732536  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:04.732542  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:04.732599  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:04.760115  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:04.760133  435193 cri.go:89] found id: ""
	I1121 14:53:04.760141  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:04.760206  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:04.764288  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:04.764356  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:04.791875  435193 cri.go:89] found id: ""
	I1121 14:53:04.791962  435193 logs.go:282] 0 containers: []
	W1121 14:53:04.791992  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:04.792015  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:04.792104  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:04.828461  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:04.828480  435193 cri.go:89] found id: ""
	I1121 14:53:04.828487  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:04.828543  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:04.833440  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:04.833509  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:04.867319  435193 cri.go:89] found id: ""
	I1121 14:53:04.867346  435193 logs.go:282] 0 containers: []
	W1121 14:53:04.867355  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:04.867362  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:04.867422  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:04.899290  435193 cri.go:89] found id: ""
	I1121 14:53:04.899396  435193 logs.go:282] 0 containers: []
	W1121 14:53:04.899421  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:04.899442  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:04.899475  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:04.919229  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:04.919308  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:05.014423  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:05.014492  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:05.014525  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:05.061532  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:05.061606  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:05.152663  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:05.152744  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:05.193683  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:05.193710  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:05.280177  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:05.280214  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:05.335094  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:05.335127  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:07.966434  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:07.966905  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:07.966955  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:07.967030  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:07.994721  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:07.994745  435193 cri.go:89] found id: ""
	I1121 14:53:07.994753  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:07.994811  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:07.998572  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:07.998644  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:08.031363  435193 cri.go:89] found id: ""
	I1121 14:53:08.031386  435193 logs.go:282] 0 containers: []
	W1121 14:53:08.031395  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:08.031403  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:08.031472  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:08.058670  435193 cri.go:89] found id: ""
	I1121 14:53:08.058695  435193 logs.go:282] 0 containers: []
	W1121 14:53:08.058705  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:08.058712  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:08.058772  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:08.088766  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:08.088789  435193 cri.go:89] found id: ""
	I1121 14:53:08.088797  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:08.088864  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:08.092703  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:08.092830  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:08.118572  435193 cri.go:89] found id: ""
	I1121 14:53:08.118597  435193 logs.go:282] 0 containers: []
	W1121 14:53:08.118607  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:08.118614  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:08.118673  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1121 14:53:07.813403  451036 node_ready.go:55] error getting node "pause-706190" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/pause-706190": dial tcp 192.168.85.2:8443: connect: connection refused
	I1121 14:53:08.146035  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:08.146059  435193 cri.go:89] found id: ""
	I1121 14:53:08.146068  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:08.146143  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:08.149893  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:08.149989  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:08.175887  435193 cri.go:89] found id: ""
	I1121 14:53:08.175912  435193 logs.go:282] 0 containers: []
	W1121 14:53:08.175920  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:08.175928  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:08.176004  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:08.232195  435193 cri.go:89] found id: ""
	I1121 14:53:08.232217  435193 logs.go:282] 0 containers: []
	W1121 14:53:08.232226  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:08.232251  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:08.232270  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:08.318216  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:08.318253  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:08.375270  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:08.375346  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:08.541483  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:08.541571  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:08.567395  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:08.567484  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:08.688557  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:08.688640  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:08.688670  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:08.750593  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:08.750675  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:08.853040  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:08.853122  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:11.396476  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:11.396857  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:11.396896  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:11.396949  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:11.445625  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:11.445692  435193 cri.go:89] found id: ""
	I1121 14:53:11.445713  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:11.445805  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:11.453071  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:11.453194  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:11.494810  435193 cri.go:89] found id: ""
	I1121 14:53:11.494884  435193 logs.go:282] 0 containers: []
	W1121 14:53:11.494912  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:11.494931  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:11.495036  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:11.540367  435193 cri.go:89] found id: ""
	I1121 14:53:11.540468  435193 logs.go:282] 0 containers: []
	W1121 14:53:11.540491  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:11.540513  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:11.540600  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:11.583922  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:11.583992  435193 cri.go:89] found id: ""
	I1121 14:53:11.584014  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:11.584104  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:11.588039  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:11.588175  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:11.640515  435193 cri.go:89] found id: ""
	I1121 14:53:11.640589  435193 logs.go:282] 0 containers: []
	W1121 14:53:11.640612  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:11.640642  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:11.640747  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:11.684219  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:11.684354  435193 cri.go:89] found id: ""
	I1121 14:53:11.684376  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:11.684494  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:11.688420  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:11.688550  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:11.740797  435193 cri.go:89] found id: ""
	I1121 14:53:11.740871  435193 logs.go:282] 0 containers: []
	W1121 14:53:11.740895  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:11.740913  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:11.741001  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:11.778725  435193 cri.go:89] found id: ""
	I1121 14:53:11.778810  435193 logs.go:282] 0 containers: []
	W1121 14:53:11.778839  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:11.778873  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:11.778902  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:11.831425  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:11.831502  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:11.933138  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:11.933227  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:11.987840  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:11.987866  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:12.135716  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:12.135798  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:12.170385  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:12.170411  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:12.276308  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:12.276392  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:12.276425  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:12.325150  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:12.325226  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:12.916493  451036 node_ready.go:49] node "pause-706190" is "Ready"
	I1121 14:53:12.916519  451036 node_ready.go:38] duration metric: took 7.104338332s for node "pause-706190" to be "Ready" ...
	I1121 14:53:12.916532  451036 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:53:12.916591  451036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:53:12.938782  451036 api_server.go:72] duration metric: took 7.275713505s to wait for apiserver process to appear ...
	I1121 14:53:12.938805  451036 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:53:12.938825  451036 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:53:13.012530  451036 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:53:13.012613  451036 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 14:53:13.438934  451036 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:53:13.447178  451036 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:53:13.447207  451036 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 14:53:13.939856  451036 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:53:13.948281  451036 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:53:13.949486  451036 api_server.go:141] control plane version: v1.34.1
	I1121 14:53:13.949514  451036 api_server.go:131] duration metric: took 1.010701403s to wait for apiserver health ...
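
The 500 responses above are normal during startup: /healthz aggregates the poststarthook checks, each "[-] ... failed: reason withheld" line is a hook that has not finished yet, and the client simply polls until it gets 200 "ok". A sketch of such a poller, for illustration only: it skips TLS verification, whereas minikube authenticates against its cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; verify against the cluster CA in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not up yet:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthz:", string(body)) // "ok"
			return
		}
		// A 500 body lists each pending poststarthook as "[-] ... failed".
		time.Sleep(500 * time.Millisecond)
	}
}
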
	I1121 14:53:13.949523  451036 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:53:13.953094  451036 system_pods.go:59] 7 kube-system pods found
	I1121 14:53:13.953137  451036 system_pods.go:61] "coredns-66bc5c9577-gv42v" [32e6ea19-296d-433e-ab3e-7e992350c3c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:53:13.953147  451036 system_pods.go:61] "etcd-pause-706190" [6c5b228e-a0df-44af-a192-d6e4c28b067d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:53:13.953153  451036 system_pods.go:61] "kindnet-w45qn" [82b593e6-c11c-40d5-b942-033d29c7abd1] Running
	I1121 14:53:13.953160  451036 system_pods.go:61] "kube-apiserver-pause-706190" [3c8bd477-767c-40be-8fa7-b5edcf70b139] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:53:13.953172  451036 system_pods.go:61] "kube-controller-manager-pause-706190" [df7e94a8-f416-4abf-94d8-da9d0ff7efd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:53:13.953179  451036 system_pods.go:61] "kube-proxy-hzbpc" [1276e562-5617-4a13-af4d-f386a07e45d1] Running
	I1121 14:53:13.953186  451036 system_pods.go:61] "kube-scheduler-pause-706190" [ae855e82-1749-4071-b918-3df98ae0229d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:53:13.953200  451036 system_pods.go:74] duration metric: took 3.67011ms to wait for pod list to return data ...
	I1121 14:53:13.953210  451036 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:53:13.956170  451036 default_sa.go:45] found service account: "default"
	I1121 14:53:13.956196  451036 default_sa.go:55] duration metric: took 2.977548ms for default service account to be created ...
	I1121 14:53:13.956205  451036 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:53:13.959140  451036 system_pods.go:86] 7 kube-system pods found
	I1121 14:53:13.959182  451036 system_pods.go:89] "coredns-66bc5c9577-gv42v" [32e6ea19-296d-433e-ab3e-7e992350c3c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:53:13.959191  451036 system_pods.go:89] "etcd-pause-706190" [6c5b228e-a0df-44af-a192-d6e4c28b067d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:53:13.959196  451036 system_pods.go:89] "kindnet-w45qn" [82b593e6-c11c-40d5-b942-033d29c7abd1] Running
	I1121 14:53:13.959203  451036 system_pods.go:89] "kube-apiserver-pause-706190" [3c8bd477-767c-40be-8fa7-b5edcf70b139] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:53:13.959214  451036 system_pods.go:89] "kube-controller-manager-pause-706190" [df7e94a8-f416-4abf-94d8-da9d0ff7efd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:53:13.959221  451036 system_pods.go:89] "kube-proxy-hzbpc" [1276e562-5617-4a13-af4d-f386a07e45d1] Running
	I1121 14:53:13.959228  451036 system_pods.go:89] "kube-scheduler-pause-706190" [ae855e82-1749-4071-b918-3df98ae0229d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:53:13.959244  451036 system_pods.go:126] duration metric: took 3.032244ms to wait for k8s-apps to be running ...
	I1121 14:53:13.959253  451036 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:53:13.959313  451036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:53:13.972839  451036 system_svc.go:56] duration metric: took 13.574935ms WaitForService to wait for kubelet
	I1121 14:53:13.972868  451036 kubeadm.go:587] duration metric: took 8.30980582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:53:13.972888  451036 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:53:13.975685  451036 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:53:13.975720  451036 node_conditions.go:123] node cpu capacity is 2
	I1121 14:53:13.975735  451036 node_conditions.go:105] duration metric: took 2.842088ms to run NodePressure ...
	I1121 14:53:13.975749  451036 start.go:242] waiting for startup goroutines ...
	I1121 14:53:13.975757  451036 start.go:247] waiting for cluster config update ...
	I1121 14:53:13.975768  451036 start.go:256] writing updated cluster config ...
	I1121 14:53:13.976107  451036 ssh_runner.go:195] Run: rm -f paused
	I1121 14:53:13.979743  451036 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:53:13.980515  451036 kapi.go:59] client config for pause-706190: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.crt", KeyFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.key", CAFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21278a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1121 14:53:13.983697  451036 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gv42v" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 14:53:15.989586  451036 pod_ready.go:104] pod "coredns-66bc5c9577-gv42v" is not "Ready", error: <nil>
	I1121 14:53:14.909821  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:14.910238  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:14.910283  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:14.910340  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:14.951414  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:14.951433  435193 cri.go:89] found id: ""
	I1121 14:53:14.951440  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:14.951495  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:14.955744  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:14.955812  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:14.993314  435193 cri.go:89] found id: ""
	I1121 14:53:14.993337  435193 logs.go:282] 0 containers: []
	W1121 14:53:14.993345  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:14.993352  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:14.993413  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:15.080525  435193 cri.go:89] found id: ""
	I1121 14:53:15.080549  435193 logs.go:282] 0 containers: []
	W1121 14:53:15.080557  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:15.080564  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:15.080627  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:15.112800  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:15.112824  435193 cri.go:89] found id: ""
	I1121 14:53:15.112833  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:15.112897  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:15.117349  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:15.117427  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:15.145759  435193 cri.go:89] found id: ""
	I1121 14:53:15.145788  435193 logs.go:282] 0 containers: []
	W1121 14:53:15.145797  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:15.145804  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:15.145864  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:15.177343  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:15.177368  435193 cri.go:89] found id: ""
	I1121 14:53:15.177376  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:15.177433  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:15.181530  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:15.181621  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:15.212067  435193 cri.go:89] found id: ""
	I1121 14:53:15.212089  435193 logs.go:282] 0 containers: []
	W1121 14:53:15.212097  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:15.212104  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:15.212166  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:15.246935  435193 cri.go:89] found id: ""
	I1121 14:53:15.246957  435193 logs.go:282] 0 containers: []
	W1121 14:53:15.246966  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:15.246977  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:15.246988  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:15.314616  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:15.314635  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:15.314653  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:15.360797  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:15.360870  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:15.427451  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:15.427489  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:15.464695  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:15.464729  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:15.541633  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:15.541757  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:15.582082  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:15.582194  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:15.709452  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:15.709531  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1121 14:53:17.989789  451036 pod_ready.go:104] pod "coredns-66bc5c9577-gv42v" is not "Ready", error: <nil>
	W1121 14:53:20.489484  451036 pod_ready.go:104] pod "coredns-66bc5c9577-gv42v" is not "Ready", error: <nil>
	I1121 14:53:20.988876  451036 pod_ready.go:94] pod "coredns-66bc5c9577-gv42v" is "Ready"
	I1121 14:53:20.988902  451036 pod_ready.go:86] duration metric: took 7.005176807s for pod "coredns-66bc5c9577-gv42v" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:20.991817  451036 pod_ready.go:83] waiting for pod "etcd-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:18.229218  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:18.229649  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
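The two lines above are the apiserver health probe that gates each collection pass: an HTTPS GET against /healthz, retried until it stops failing with "connection refused". A minimal sketch of such a probe follows, reusing the address from the log; the TLS setup is an illustrative assumption (minikube trusts the cluster CA rather than skipping verification).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-CA-signed cert; this sketch skips
		// verification for brevity, which minikube itself does not do.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		// e.g. "dial tcp 192.168.76.2:8443: connect: connection refused",
		// exactly the "stopped:" case logged above.
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver returns 200 "ok"
}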
	I1121 14:53:18.229693  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:18.229750  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:18.260502  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:18.260529  435193 cri.go:89] found id: ""
	I1121 14:53:18.260537  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:18.260604  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:18.264549  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:18.264627  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:18.290291  435193 cri.go:89] found id: ""
	I1121 14:53:18.290316  435193 logs.go:282] 0 containers: []
	W1121 14:53:18.290325  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:18.290332  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:18.290402  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:18.318155  435193 cri.go:89] found id: ""
	I1121 14:53:18.318178  435193 logs.go:282] 0 containers: []
	W1121 14:53:18.318187  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:18.318193  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:18.318262  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:18.345309  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:18.345331  435193 cri.go:89] found id: ""
	I1121 14:53:18.345340  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:18.345415  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:18.349188  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:18.349281  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:18.373964  435193 cri.go:89] found id: ""
	I1121 14:53:18.373988  435193 logs.go:282] 0 containers: []
	W1121 14:53:18.373996  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:18.374003  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:18.374060  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:18.400254  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:18.400277  435193 cri.go:89] found id: ""
	I1121 14:53:18.400286  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:18.400344  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:18.403994  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:18.404068  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:18.429936  435193 cri.go:89] found id: ""
	I1121 14:53:18.429961  435193 logs.go:282] 0 containers: []
	W1121 14:53:18.429970  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:18.429976  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:18.430038  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:18.456664  435193 cri.go:89] found id: ""
	I1121 14:53:18.456685  435193 logs.go:282] 0 containers: []
	W1121 14:53:18.456694  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:18.456702  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:18.456720  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:18.519237  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:18.519272  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:18.545007  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:18.545035  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:18.610627  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:18.610664  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:18.646322  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:18.646399  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:18.761407  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:18.761445  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:18.777947  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:18.777978  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:18.850910  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:18.850929  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:18.850943  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:21.384840  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:21.385277  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:21.385332  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:21.385388  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:21.414187  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:21.414211  435193 cri.go:89] found id: ""
	I1121 14:53:21.414219  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:21.414276  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:21.418222  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:21.418291  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:21.448017  435193 cri.go:89] found id: ""
	I1121 14:53:21.448042  435193 logs.go:282] 0 containers: []
	W1121 14:53:21.448051  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:21.448057  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:21.448121  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:21.476317  435193 cri.go:89] found id: ""
	I1121 14:53:21.476341  435193 logs.go:282] 0 containers: []
	W1121 14:53:21.476359  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:21.476365  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:21.476469  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:21.512058  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:21.512079  435193 cri.go:89] found id: ""
	I1121 14:53:21.512087  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:21.512142  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:21.516019  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:21.516090  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:21.546934  435193 cri.go:89] found id: ""
	I1121 14:53:21.546958  435193 logs.go:282] 0 containers: []
	W1121 14:53:21.546967  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:21.546974  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:21.547034  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:21.582288  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:21.582310  435193 cri.go:89] found id: ""
	I1121 14:53:21.582318  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:21.582375  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:21.587252  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:21.587345  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:21.619674  435193 cri.go:89] found id: ""
	I1121 14:53:21.619703  435193 logs.go:282] 0 containers: []
	W1121 14:53:21.619712  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:21.619719  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:21.619800  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:21.647576  435193 cri.go:89] found id: ""
	I1121 14:53:21.647599  435193 logs.go:282] 0 containers: []
	W1121 14:53:21.647607  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:21.647616  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:21.647655  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:21.766527  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:21.766563  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:21.783310  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:21.783392  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:21.851781  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:21.851800  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:21.852790  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:21.886330  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:21.886359  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:21.954691  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:21.954730  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:21.984024  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:21.984052  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:22.061539  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:22.061578  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1121 14:53:22.998176  451036 pod_ready.go:104] pod "etcd-pause-706190" is not "Ready", error: <nil>
	W1121 14:53:24.998262  451036 pod_ready.go:104] pod "etcd-pause-706190" is not "Ready", error: <nil>
	I1121 14:53:26.997175  451036 pod_ready.go:94] pod "etcd-pause-706190" is "Ready"
	I1121 14:53:26.997205  451036 pod_ready.go:86] duration metric: took 6.005361284s for pod "etcd-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:26.999660  451036 pod_ready.go:83] waiting for pod "kube-apiserver-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.007255  451036 pod_ready.go:94] pod "kube-apiserver-pause-706190" is "Ready"
	I1121 14:53:27.007302  451036 pod_ready.go:86] duration metric: took 7.613939ms for pod "kube-apiserver-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.010541  451036 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.016286  451036 pod_ready.go:94] pod "kube-controller-manager-pause-706190" is "Ready"
	I1121 14:53:27.016370  451036 pod_ready.go:86] duration metric: took 5.796546ms for pod "kube-controller-manager-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.019157  451036 pod_ready.go:83] waiting for pod "kube-proxy-hzbpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.195688  451036 pod_ready.go:94] pod "kube-proxy-hzbpc" is "Ready"
	I1121 14:53:27.195717  451036 pod_ready.go:86] duration metric: took 176.533051ms for pod "kube-proxy-hzbpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.396179  451036 pod_ready.go:83] waiting for pod "kube-scheduler-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.797434  451036 pod_ready.go:94] pod "kube-scheduler-pause-706190" is "Ready"
	I1121 14:53:27.797465  451036 pod_ready.go:86] duration metric: took 401.254895ms for pod "kube-scheduler-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.797479  451036 pod_ready.go:40] duration metric: took 13.817700305s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:53:27.878467  451036 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 14:53:27.881608  451036 out.go:179] * Done! kubectl is now configured to use "pause-706190" cluster and "default" namespace by default
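The pod_ready.go waits above (process 451036) poll each control-plane pod until it carries a Ready=True condition or disappears. Here is a minimal client-go sketch of the same idea, assuming a kubeconfig at the default location; the pod name is taken from the log, and the "or be gone" branch of minikube's wait is omitted for brevity.

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod carries a Ready=True condition, which is
// what the pod_ready.go checks above are looking for.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-706190", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient API errors
			}
			// minikube's wait also treats a deleted pod ("or be gone") as
			// success; that branch is omitted here.
			return isReady(pod), nil
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(`pod "etcd-pause-706190" is "Ready"`)
}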
	I1121 14:53:24.603458  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:24.603954  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:24.604017  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:24.604097  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:24.632094  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:24.632115  435193 cri.go:89] found id: ""
	I1121 14:53:24.632123  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:24.632186  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:24.635983  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:24.636060  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:24.664231  435193 cri.go:89] found id: ""
	I1121 14:53:24.664257  435193 logs.go:282] 0 containers: []
	W1121 14:53:24.664265  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:24.664272  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:24.664334  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:24.692142  435193 cri.go:89] found id: ""
	I1121 14:53:24.692166  435193 logs.go:282] 0 containers: []
	W1121 14:53:24.692177  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:24.692184  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:24.692252  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:24.721002  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:24.721025  435193 cri.go:89] found id: ""
	I1121 14:53:24.721034  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:24.721093  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:24.724974  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:24.725051  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:24.751805  435193 cri.go:89] found id: ""
	I1121 14:53:24.751831  435193 logs.go:282] 0 containers: []
	W1121 14:53:24.751840  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:24.751847  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:24.751937  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:24.778847  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:24.778870  435193 cri.go:89] found id: ""
	I1121 14:53:24.778878  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:24.778959  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:24.782839  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:24.782966  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:24.808571  435193 cri.go:89] found id: ""
	I1121 14:53:24.808651  435193 logs.go:282] 0 containers: []
	W1121 14:53:24.808680  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:24.808709  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:24.808775  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:24.836097  435193 cri.go:89] found id: ""
	I1121 14:53:24.836122  435193 logs.go:282] 0 containers: []
	W1121 14:53:24.836130  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:24.836140  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:24.836152  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:24.905301  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:24.905336  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:24.932299  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:24.932377  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:24.992486  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:24.992524  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:25.029458  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:25.029493  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:25.162229  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:25.162278  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:25.179973  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:25.180005  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:25.253766  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:25.253786  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:25.253802  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:27.789303  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:27.789769  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:27.789832  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:27.789902  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:27.832225  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:27.832248  435193 cri.go:89] found id: ""
	I1121 14:53:27.832257  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:27.832317  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:27.836793  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:27.836871  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:27.869721  435193 cri.go:89] found id: ""
	I1121 14:53:27.869747  435193 logs.go:282] 0 containers: []
	W1121 14:53:27.869756  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:27.869763  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:27.869821  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:27.946258  435193 cri.go:89] found id: ""
	I1121 14:53:27.946286  435193 logs.go:282] 0 containers: []
	W1121 14:53:27.946295  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:27.946302  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:27.946375  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:27.978748  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:27.978771  435193 cri.go:89] found id: ""
	I1121 14:53:27.978779  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:27.978832  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:27.985644  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:27.985714  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:28.027180  435193 cri.go:89] found id: ""
	I1121 14:53:28.027208  435193 logs.go:282] 0 containers: []
	W1121 14:53:28.027217  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:28.027224  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:28.027284  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:28.070399  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:28.070418  435193 cri.go:89] found id: ""
	I1121 14:53:28.070426  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:28.070483  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:28.074849  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:28.074918  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:28.105114  435193 cri.go:89] found id: ""
	I1121 14:53:28.105136  435193 logs.go:282] 0 containers: []
	W1121 14:53:28.105144  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:28.105151  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:28.105226  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	
	
	==> CRI-O <==
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.58989127Z" level=info msg="Creating container: kube-system/kube-proxy-hzbpc/kube-proxy" id=8f796da3-b3b8-4aa0-98d2-bf6f4c47ad2a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.590173858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.600981628Z" level=info msg="Created container b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256: kube-system/kube-apiserver-pause-706190/kube-apiserver" id=956f70ff-a76b-4bb7-a244-e50032aa4625 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.605564218Z" level=info msg="Starting container: b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256" id=2b84813a-7777-40db-b19f-f43eafebdc68 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.615282578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.616126807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.626880118Z" level=info msg="Started container" PID=2379 containerID=b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256 description=kube-system/kube-apiserver-pause-706190/kube-apiserver id=2b84813a-7777-40db-b19f-f43eafebdc68 name=/runtime.v1.RuntimeService/StartContainer sandboxID=36f556e38e05b9edcb1cc64f5a0c1af7e311f4715802d21f952101daeffbfdac
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.684358903Z" level=info msg="Created container 8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab: kube-system/kindnet-w45qn/kindnet-cni" id=3a62500c-d87c-4fdd-86f7-422dca43faac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.693124184Z" level=info msg="Starting container: 8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab" id=fb88e49b-9b35-4bc3-a090-d74fae1b6f02 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.694969804Z" level=info msg="Started container" PID=2391 containerID=8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab description=kube-system/kindnet-w45qn/kindnet-cni id=fb88e49b-9b35-4bc3-a090-d74fae1b6f02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be13f76cefe1e3ba107c3adeae0bfb28b6e4dacdcd83e8cd7cac8fbf08008e53
	Nov 21 14:53:09 pause-706190 crio[2068]: time="2025-11-21T14:53:09.104251814Z" level=info msg="Created container 7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c: kube-system/kube-proxy-hzbpc/kube-proxy" id=8f796da3-b3b8-4aa0-98d2-bf6f4c47ad2a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:53:09 pause-706190 crio[2068]: time="2025-11-21T14:53:09.105117212Z" level=info msg="Starting container: 7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c" id=7d2c5e16-4f65-47a3-a8a4-12c707ff60ac name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:53:09 pause-706190 crio[2068]: time="2025-11-21T14:53:09.107640174Z" level=info msg="Started container" PID=2404 containerID=7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c description=kube-system/kube-proxy-hzbpc/kube-proxy id=7d2c5e16-4f65-47a3-a8a4-12c707ff60ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=29237e21d7eb16448f00207288f942e2be7278f2f821198bf3e415aa4f5e04cb
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.036972516Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.041087424Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.041122518Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.041144697Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.044192178Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.044239867Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.044264212Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.047347582Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.047382036Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.047405577Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.050389787Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.05042411Z" level=info msg="Updated default CNI network name to kindnet"
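The CNI monitoring events at the end of the journal show CRI-O watching /etc/cni/net.d and reloading its default network each time kindnet rewrites its conflist (CREATE of the .temp file, then WRITE, then RENAME into place). A minimal sketch of that kind of directory watch follows, using the third-party fsnotify package as an assumption; CRI-O's actual monitor is internal and more involved.

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// CRI-O re-scans the directory and updates its default CNI
			// network on events like the CREATE/WRITE/RENAME seen above.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}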
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	7fd1af596bd96       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   29237e21d7eb1       kube-proxy-hzbpc                       kube-system
	8fb4ee525b249       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   be13f76cefe1e       kindnet-w45qn                          kube-system
	b4215710d430f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   36f556e38e05b       kube-apiserver-pause-706190            kube-system
	bfb73c516ae0a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   4e42d008df7b1       kube-controller-manager-pause-706190   kube-system
	fbdd6d6086f23       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   97fa3155b7e63       kube-scheduler-pause-706190            kube-system
	e9789f6445316       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   e0d356b2681af       etcd-pause-706190                      kube-system
	0271386e341fa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   ff76054a7fb26       coredns-66bc5c9577-gv42v               kube-system
	040f84d32cdc7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   ff76054a7fb26       coredns-66bc5c9577-gv42v               kube-system
	45d1eb07971cc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   29237e21d7eb1       kube-proxy-hzbpc                       kube-system
	8b6e9299e5d66       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   be13f76cefe1e       kindnet-w45qn                          kube-system
	c933d49a5b407       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   97fa3155b7e63       kube-scheduler-pause-706190            kube-system
	6812ac6759a64       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   e0d356b2681af       etcd-pause-706190                      kube-system
	197f8208783cc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   4e42d008df7b1       kube-controller-manager-pause-706190   kube-system
	850ff406c6df8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   36f556e38e05b       kube-apiserver-pause-706190            kube-system
	
	
	==> coredns [0271386e341fa99a2b463a6333f9aca47fc2de7c5cb39e67ebb2fccf8ffa1a5e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55964 - 625 "HINFO IN 3085672885384044256.6973577108607144418. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033646716s
	
	
	==> coredns [040f84d32cdc7b868103d9e8b5e9e17971b6d790e17758c33a035160a39e7d02] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41994 - 36070 "HINFO IN 275753412209615345.3016881851810483464. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024853678s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-706190
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-706190
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=pause-706190
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_52_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-706190
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:53:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:52:52 +0000   Fri, 21 Nov 2025 14:51:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:52:52 +0000   Fri, 21 Nov 2025 14:51:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:52:52 +0000   Fri, 21 Nov 2025 14:51:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:52:52 +0000   Fri, 21 Nov 2025 14:52:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-706190
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                aa04f144-e699-4e79-bbe3-e08f8d8ad6bb
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gv42v                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
	  kube-system                 etcd-pause-706190                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kindnet-w45qn                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-706190             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-706190    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-hzbpc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-706190             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Warning  CgroupV1                 93s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s (x8 over 93s)  kubelet          Node pause-706190 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s (x8 over 93s)  kubelet          Node pause-706190 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s (x8 over 93s)  kubelet          Node pause-706190 status is now: NodeHasSufficientPID
	  Normal   Starting                 86s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 86s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s                kubelet          Node pause-706190 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s                kubelet          Node pause-706190 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s                kubelet          Node pause-706190 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                node-controller  Node pause-706190 event: Registered Node pause-706190 in Controller
	  Normal   NodeReady                39s                kubelet          Node pause-706190 status is now: NodeReady
	  Warning  ContainerGCFailed        26s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           15s                node-controller  Node pause-706190 event: Registered Node pause-706190 in Controller
	
	
	==> dmesg <==
	[Nov21 14:24] overlayfs: idmapped layers are currently not supported
	[Nov21 14:25] overlayfs: idmapped layers are currently not supported
	[  +3.881075] overlayfs: idmapped layers are currently not supported
	[Nov21 14:26] overlayfs: idmapped layers are currently not supported
	[Nov21 14:27] overlayfs: idmapped layers are currently not supported
	[Nov21 14:29] overlayfs: idmapped layers are currently not supported
	[Nov21 14:33] kauditd_printk_skb: 8 callbacks suppressed
	[ +39.333625] overlayfs: idmapped layers are currently not supported
	[Nov21 14:34] overlayfs: idmapped layers are currently not supported
	[Nov21 14:35] overlayfs: idmapped layers are currently not supported
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6812ac6759a64275994e5d4179d3b1c59a354178a5c457f989b66dddfd9abce0] <==
	{"level":"warn","ts":"2025-11-21T14:52:02.238165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.292589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.341376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.368677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.415969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.464479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.557885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:52:57.365731Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-21T14:52:57.365790Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-706190","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-21T14:52:57.365878Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-21T14:52:57.501910Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-21T14:52:57.501988Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:52:57.502028Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-21T14:52:57.502054Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-21T14:52:57.502057Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-21T14:52:57.502186Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-21T14:52:57.502212Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-21T14:52:57.502220Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-21T14:52:57.502254Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-21T14:52:57.502268Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-21T14:52:57.502275Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:52:57.505370Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-21T14:52:57.505449Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:52:57.505489Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-21T14:52:57.505505Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-706190","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [e9789f6445316da4ede3f914ea50a7bb3356c7e4b2a8fc78aef346401a3881ae] <==
	{"level":"warn","ts":"2025-11-21T14:53:10.738852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.771033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.797012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.825481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.847157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.878705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.916754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.939069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.964043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.993365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.018936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.049381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.076477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.102585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.130213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.161458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.200484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.266477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.295523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.324524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.339202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.389776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.480941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.483617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.648755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56470","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:53:31 up  2:36,  0 user,  load average: 2.04, 2.42, 2.17
	Linux pause-706190 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8b6e9299e5d66efba1babf3908bb853e3ef2453315bc7a675c28ebfadd857a0b] <==
	I1121 14:52:12.112590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:52:12.112855       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:52:12.112978       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:52:12.112996       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:52:12.113010       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:52:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:52:12.401909       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:52:12.402090       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:52:12.402142       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:52:12.403137       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:52:42.402650       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:52:42.402895       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:52:42.403046       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 14:52:42.403101       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1121 14:52:43.903040       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:52:43.903068       1 metrics.go:72] Registering metrics
	I1121 14:52:43.903145       1 controller.go:711] "Syncing nftables rules"
	I1121 14:52:52.403414       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:52:52.403502       1 main.go:301] handling current node
	
	
	==> kindnet [8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab] <==
	I1121 14:53:08.834524       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:53:08.834919       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:53:08.835242       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:53:08.846365       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:53:08.846438       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:53:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:53:09.039619       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:53:09.039738       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:53:09.039775       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:53:09.040802       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:53:13.040891       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:53:13.040924       1 metrics.go:72] Registering metrics
	I1121 14:53:13.040991       1 controller.go:711] "Syncing nftables rules"
	I1121 14:53:19.036535       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:53:19.036616       1 main.go:301] handling current node
	I1121 14:53:29.037890       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:53:29.037924       1 main.go:301] handling current node
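
Both kindnet instances follow the standard client-go pattern visible in their logs: start informers, block until the caches sync, then reconcile the node list on a timer. The first instance's "Failed to watch ... i/o timeout" errors at 14:52:42 show the same informers retrying while the apiserver was unreachable; the caches synced as soon as a list succeeded. A minimal sketch of the idiom (the in-cluster config is how a pod resolves the https://10.96.0.1:443 apiserver address seen above; everything else is generic client-go, not kindnet's actual source):

	package main
	
	import (
		"fmt"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)
	
		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		nodeInformer := factory.Core().V1().Nodes().Informer()
	
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
	
		// The "Waiting for caches to sync" / "Caches are synced" pair in the
		// kindnet log corresponds to this call returning true.
		if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
			panic("cache sync failed")
		}
		fmt.Println("caches are synced")
	}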
	
	
	==> kube-apiserver [850ff406c6df8f8893b6ab5c9796026832713346c5d66cfb49b7adfbe435e36e] <==
	W1121 14:52:57.376807       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.376820       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.376868       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.376916       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.376963       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377005       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377049       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377091       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377135       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377216       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.374208       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.374895       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377911       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377961       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377993       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378034       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378055       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378077       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378102       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378123       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378150       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378174       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378197       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378219       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378239       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256] <==
	I1121 14:53:12.980737       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:53:13.008697       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1121 14:53:13.018468       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:53:13.018510       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 14:53:13.027741       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 14:53:13.027817       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 14:53:13.027827       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:53:13.027947       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1121 14:53:13.032923       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1121 14:53:13.033152       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 14:53:13.033195       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:53:13.035652       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:53:13.037020       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:53:13.037555       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1121 14:53:13.038207       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:53:13.038249       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:53:13.038257       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:53:13.038264       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:53:13.040360       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 14:53:13.710807       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:53:14.884180       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:53:16.282816       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:53:16.526050       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:53:16.577374       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:53:16.676233       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [197f8208783cc8b8d66bcaabe4dafe985f92bc2bb1c5c712bf1bd3332e0271f2] <==
	I1121 14:52:10.349399       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:52:10.349435       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:52:10.349441       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:52:10.349446       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:52:10.351896       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:52:10.364434       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:52:10.364377       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:52:10.369446       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:52:10.369489       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:52:10.369572       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:52:10.369656       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:52:10.369454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:52:10.374559       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-706190" podCIDRs=["10.244.0.0/24"]
	I1121 14:52:10.376353       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:52:10.379693       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:52:10.388492       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:52:10.390609       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:52:10.390709       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:52:10.390613       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:52:10.390656       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:52:10.390626       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:52:10.391932       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:52:10.395332       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:52:10.395421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:52:55.352662       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [bfb73c516ae0adef4c04d9b39aacb5990ae73b4cfa9b6fd7f5696465b6a4b222] <==
	I1121 14:53:16.268786       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:53:16.268826       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:53:16.268869       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:53:16.270866       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:53:16.272004       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:53:16.274640       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:53:16.274700       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:53:16.276840       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:53:16.289201       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:53:16.304832       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:53:16.308949       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:53:16.311734       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:53:16.313485       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:53:16.317840       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:53:16.317857       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:53:16.319083       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:53:16.319090       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:53:16.320243       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:53:16.320347       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:53:16.320545       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-706190"
	I1121 14:53:16.320595       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 14:53:16.324494       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:53:16.327198       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:53:16.330935       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:53:16.333211       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [45d1eb07971cca34df63cc22e950e71d40b97c9098663f5e60130d5a971a5bdc] <==
	I1121 14:52:12.773152       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:52:12.857908       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:52:12.958945       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:52:12.958979       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:52:12.959062       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:52:12.978360       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:52:12.978415       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:52:12.982435       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:52:12.982736       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:52:12.982804       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:52:12.986552       1 config.go:200] "Starting service config controller"
	I1121 14:52:12.986630       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:52:12.987683       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:52:12.987770       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:52:12.987812       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:52:12.987838       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:52:12.992138       1 config.go:309] "Starting node config controller"
	I1121 14:52:12.992223       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:52:12.992257       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:52:13.088136       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:52:13.088346       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:52:13.088801       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c] <==
	I1121 14:53:11.482586       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:53:12.977444       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:53:13.078412       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:53:13.078505       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:53:13.078603       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:53:13.227715       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:53:13.227828       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:53:13.245627       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:53:13.245967       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:53:13.245993       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:53:13.247112       1 config.go:200] "Starting service config controller"
	I1121 14:53:13.247179       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:53:13.257158       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:53:13.257245       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:53:13.257289       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:53:13.257334       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:53:13.260744       1 config.go:309] "Starting node config controller"
	I1121 14:53:13.260831       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:53:13.260863       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:53:13.347959       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:53:13.358355       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:53:13.358361       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
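
Both kube-proxy instances log the same configuration hint: with nodePortAddresses unset, NodePort services are accepted on every local IP. If that were undesired, the fix the warning points at lives in the kube-proxy component config; a sketch, assuming the v1alpha1 config group kube-proxy currently serves:

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# "primary" is the special value behind the suggested
	# --nodeport-addresses primary: listen only on the node's primary IP.
	nodePortAddresses:
	  - primary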
	
	
	==> kube-scheduler [c933d49a5b407943549354e3a9e5fbb544091961370b604289633563c7439472] <==
	E1121 14:52:04.225545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:52:04.225635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:52:04.225741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:52:04.225816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:52:04.225931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:52:04.228182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:52:04.228548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:52:04.228683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:52:04.228832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:52:04.228948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 14:52:04.229177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:52:04.229262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:52:04.234246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:52:04.234401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:52:04.234509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:52:04.234598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:52:04.234730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:52:04.234916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1121 14:52:05.405423       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:52:57.353218       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1121 14:52:57.353244       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1121 14:52:57.353264       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1121 14:52:57.353301       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:52:57.353478       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1121 14:52:57.353494       1 run.go:72] "command failed" err="finished without leader elect"
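
The burst of "Failed to watch ... forbidden" errors at 14:52:04 is the usual startup race: the scheduler's informers begin listing before the apiserver has finished bootstrapping the RBAC bindings for system:kube-scheduler, and the errors stop once those exist (the cache-synced line follows a second later). Whether the permission is in place can be checked afterwards with an impersonated SelfSubjectAccessReview, the programmatic equivalent of kubectl auth can-i list pods --as=system:kube-scheduler; a sketch, assuming an admin kubeconfig in the default location:

	package main
	
	import (
		"context"
		"fmt"
	
		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load admin credentials, then impersonate the scheduler user from
		// the errors above (impersonation itself requires RBAC permission).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cfg.Impersonate = rest.ImpersonationConfig{UserName: "system:kube-scheduler"}
		clientset := kubernetes.NewForConfigOrDie(cfg)
	
		review := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "pods"},
			},
		}
		res, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().
			Create(context.Background(), review, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		// Allowed once RBAC bootstrapping has finished; the forbidden errors
		// above happened before that point.
		fmt.Println("allowed:", res.Status.Allowed)
	}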
	
	
	==> kube-scheduler [fbdd6d6086f23497b1695b35d4843d7323a4ff9df5621baeda893ea64d511a23] <==
	I1121 14:53:11.207299       1 serving.go:386] Generated self-signed cert in-memory
	I1121 14:53:13.255338       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:53:13.255370       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:53:13.266143       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 14:53:13.266248       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 14:53:13.266320       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:53:13.266349       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:53:13.266410       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:53:13.266437       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:53:13.266556       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:53:13.266623       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:53:13.367322       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 14:53:13.367439       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:53:13.368329       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.349107    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ca5f2bab7f47ea2d6582c273e4cdc251" pod="kube-system/kube-scheduler-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.349427    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gv42v\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="32e6ea19-296d-433e-ab3e-7e992350c3c2" pod="kube-system/coredns-66bc5c9577-gv42v"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.349756    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="48e6e2eed4477e7c5ce26e1c1c6d3548" pod="kube-system/kube-apiserver-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: I1121 14:53:08.466797    1311 scope.go:117] "RemoveContainer" containerID="8b6e9299e5d66efba1babf3908bb853e3ef2453315bc7a675c28ebfadd857a0b"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.470586    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-w45qn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="82b593e6-c11c-40d5-b942-033d29c7abd1" pod="kube-system/kindnet-w45qn"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.470988    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gv42v\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="32e6ea19-296d-433e-ab3e-7e992350c3c2" pod="kube-system/coredns-66bc5c9577-gv42v"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.471373    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="48e6e2eed4477e7c5ce26e1c1c6d3548" pod="kube-system/kube-apiserver-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.471691    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c15f6a2cc4915b5487781749323e86ff" pod="kube-system/etcd-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.471986    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3207d4d2d11971cbf6bbc243763a30e6" pod="kube-system/kube-controller-manager-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.472534    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ca5f2bab7f47ea2d6582c273e4cdc251" pod="kube-system/kube-scheduler-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: I1121 14:53:08.531899    1311 scope.go:117] "RemoveContainer" containerID="45d1eb07971cca34df63cc22e950e71d40b97c9098663f5e60130d5a971a5bdc"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.543536    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="48e6e2eed4477e7c5ce26e1c1c6d3548" pod="kube-system/kube-apiserver-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.558320    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c15f6a2cc4915b5487781749323e86ff" pod="kube-system/etcd-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.558665    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3207d4d2d11971cbf6bbc243763a30e6" pod="kube-system/kube-controller-manager-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.558855    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ca5f2bab7f47ea2d6582c273e4cdc251" pod="kube-system/kube-scheduler-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.559026    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzbpc\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1276e562-5617-4a13-af4d-f386a07e45d1" pod="kube-system/kube-proxy-hzbpc"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.559189    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-w45qn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="82b593e6-c11c-40d5-b942-033d29c7abd1" pod="kube-system/kindnet-w45qn"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.559374    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gv42v\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="32e6ea19-296d-433e-ab3e-7e992350c3c2" pod="kube-system/coredns-66bc5c9577-gv42v"
	Nov 21 14:53:12 pause-706190 kubelet[1311]: E1121 14:53:12.849805    1311 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-706190\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-706190' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 21 14:53:12 pause-706190 kubelet[1311]: E1121 14:53:12.850408    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-706190\" is forbidden: User \"system:node:pause-706190\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-706190' and this object" podUID="48e6e2eed4477e7c5ce26e1c1c6d3548" pod="kube-system/kube-apiserver-pause-706190"
	Nov 21 14:53:12 pause-706190 kubelet[1311]: E1121 14:53:12.915901    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-706190\" is forbidden: User \"system:node:pause-706190\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-706190' and this object" podUID="c15f6a2cc4915b5487781749323e86ff" pod="kube-system/etcd-pause-706190"
	Nov 21 14:53:16 pause-706190 kubelet[1311]: W1121 14:53:16.238048    1311 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 21 14:53:28 pause-706190 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:53:28 pause-706190 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:53:28 pause-706190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-706190 -n pause-706190
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-706190 -n pause-706190: exit status 2 (361.621374ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
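The --format flag renders minikube's status struct through a Go template, so {{.APIServer}} prints "Running" even while the command as a whole exits 2: the exit code, not the template output, carries the degraded-state signal, which is why the harness notes "(may be ok)". A toy reproduction of just the rendering step; only the APIServer field is confirmed by the command above, the other fields are assumptions for illustration:

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Stand-in for the struct behind --format; illustrative only.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}
	
	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// Prints "Running" regardless of the other fields, mirroring how a
		// nonzero exit can accompany a "Running" template result.
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}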
helpers_test.go:269: (dbg) Run:  kubectl --context pause-706190 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-706190
helpers_test.go:243: (dbg) docker inspect pause-706190:

-- stdout --
	[
	    {
	        "Id": "825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5",
	        "Created": "2025-11-21T14:51:39.42172846Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 446929,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:51:39.497727295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5/hostname",
	        "HostsPath": "/var/lib/docker/containers/825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5/hosts",
	        "LogPath": "/var/lib/docker/containers/825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5/825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5-json.log",
	        "Name": "/pause-706190",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-706190:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-706190",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "825f44f6e1cdab5ae32eb1c4d0ec2963cac1a6f23d24a01005a87142133e3ad5",
	                "LowerDir": "/var/lib/docker/overlay2/ae5cba5c9d043c50cfa2963d11dc3a54d992a67e25d2d84684be2f6df851234c-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae5cba5c9d043c50cfa2963d11dc3a54d992a67e25d2d84684be2f6df851234c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae5cba5c9d043c50cfa2963d11dc3a54d992a67e25d2d84684be2f6df851234c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae5cba5c9d043c50cfa2963d11dc3a54d992a67e25d2d84684be2f6df851234c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-706190",
	                "Source": "/var/lib/docker/volumes/pause-706190/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-706190",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-706190",
	                "name.minikube.sigs.k8s.io": "pause-706190",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08f07c3af72b4d6ddd9525e136d39244cda3f6e5ed0f6b22caa8ccfb53a44442",
	            "SandboxKey": "/var/run/docker/netns/08f07c3af72b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-706190": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:88:7b:03:30:f1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26798d30f93b1e0bf415d72857a44bd1fc90420e58b51ac06cac18b61d8f7e46",
	                    "EndpointID": "f9d6051517240099d960b5c391f09cb764f9699dd391246859d6556f80b1ef87",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-706190",
	                        "825f44f6e1cd"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
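The inspect output above is how the harness reaches the node: port 22/tcp inside the container is published on 127.0.0.1:33393, and minikube itself extracts that mapping later in this log with a Go template (docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190). A small sketch of the equivalent lookup, assuming the docker CLI is on PATH:

	// ssh_port.go: read the published host port for 22/tcp from
	// docker container inspect, mirroring the template that
	// cli_runner.go runs in the provisioning log below.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("pause-706190")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("127.0.0.1:" + port) // 33393 in the inspect output above
	}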
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-706190 -n pause-706190
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-706190 -n pause-706190: exit status 2 (352.559022ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
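Note that status --format={{.Host}} prints Running while still exiting 2: minikube status reports degraded cluster state through its exit code, which is why the helper records "may be ok" instead of aborting. A hedged Go sketch of reading both the output and the exit code, assuming the binary path used throughout this report:

	// status_exit.go: run minikube status and inspect the exit code
	// rather than treating every non-zero result as fatal, as the
	// helper above does; the meaning of specific codes is minikube's.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "pause-706190", "-n", "pause-706190")
		out, err := cmd.Output()
		host := strings.TrimSpace(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("host=%s exit=%d (may be ok)\n", host, ee.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("status did not run:", err)
			return
		}
		fmt.Printf("host=%s exit=0\n", host)
	}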
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-706190 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-706190 logs -n 25: (1.405461901s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-140266 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:47 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p missing-upgrade-036945 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-036945    │ jenkins │ v1.32.0 │ 21 Nov 25 14:47 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p NoKubernetes-140266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ delete  │ -p NoKubernetes-140266                                                                                                                   │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p NoKubernetes-140266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ ssh     │ -p NoKubernetes-140266 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │                     │
	│ stop    │ -p NoKubernetes-140266                                                                                                                   │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p NoKubernetes-140266 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p missing-upgrade-036945 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-036945    │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:49 UTC │
	│ ssh     │ -p NoKubernetes-140266 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │                     │
	│ delete  │ -p NoKubernetes-140266                                                                                                                   │ NoKubernetes-140266       │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-886613 │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:49 UTC │
	│ stop    │ -p kubernetes-upgrade-886613                                                                                                             │ kubernetes-upgrade-886613 │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ start   │ -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-886613 │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │                     │
	│ delete  │ -p missing-upgrade-036945                                                                                                                │ missing-upgrade-036945    │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ start   │ -p stopped-upgrade-489557 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-489557    │ jenkins │ v1.32.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ stop    │ stopped-upgrade-489557 stop                                                                                                              │ stopped-upgrade-489557    │ jenkins │ v1.32.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:50 UTC │
	│ start   │ -p stopped-upgrade-489557 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-489557    │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ delete  │ -p stopped-upgrade-489557                                                                                                                │ stopped-upgrade-489557    │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ start   │ -p running-upgrade-913045 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-913045    │ jenkins │ v1.32.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:51 UTC │
	│ start   │ -p running-upgrade-913045 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-913045    │ jenkins │ v1.37.0 │ 21 Nov 25 14:51 UTC │ 21 Nov 25 14:51 UTC │
	│ delete  │ -p running-upgrade-913045                                                                                                                │ running-upgrade-913045    │ jenkins │ v1.37.0 │ 21 Nov 25 14:51 UTC │ 21 Nov 25 14:51 UTC │
	│ start   │ -p pause-706190 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-706190              │ jenkins │ v1.37.0 │ 21 Nov 25 14:51 UTC │ 21 Nov 25 14:52 UTC │
	│ start   │ -p pause-706190 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-706190              │ jenkins │ v1.37.0 │ 21 Nov 25 14:52 UTC │ 21 Nov 25 14:53 UTC │
	│ pause   │ -p pause-706190 --alsologtostderr -v=5                                                                                                   │ pause-706190              │ jenkins │ v1.37.0 │ 21 Nov 25 14:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:52:56
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:52:56.065107  451036 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:52:56.065317  451036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:52:56.065350  451036 out.go:374] Setting ErrFile to fd 2...
	I1121 14:52:56.065371  451036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:52:56.065618  451036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:52:56.065998  451036 out.go:368] Setting JSON to false
	I1121 14:52:56.067158  451036 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9328,"bootTime":1763727448,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:52:56.067262  451036 start.go:143] virtualization:  
	I1121 14:52:56.070582  451036 out.go:179] * [pause-706190] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:52:56.074554  451036 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:52:56.074640  451036 notify.go:221] Checking for updates...
	I1121 14:52:56.081382  451036 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:52:56.084437  451036 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:52:56.087421  451036 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:52:56.090449  451036 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:52:56.093575  451036 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:52:56.097186  451036 config.go:182] Loaded profile config "pause-706190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:52:56.097905  451036 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:52:56.136595  451036 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:52:56.136781  451036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:52:56.210991  451036 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:52:56.201824302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:52:56.211107  451036 docker.go:319] overlay module found
	I1121 14:52:56.214226  451036 out.go:179] * Using the docker driver based on existing profile
	I1121 14:52:56.217086  451036 start.go:309] selected driver: docker
	I1121 14:52:56.217111  451036 start.go:930] validating driver "docker" against &{Name:pause-706190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:52:56.217250  451036 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:52:56.217358  451036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:52:56.273999  451036 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:52:56.263527792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:52:56.274405  451036 cni.go:84] Creating CNI manager for ""
	I1121 14:52:56.274476  451036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:52:56.274529  451036 start.go:353] cluster config:
	{Name:pause-706190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:52:56.277730  451036 out.go:179] * Starting "pause-706190" primary control-plane node in "pause-706190" cluster
	I1121 14:52:56.280580  451036 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:52:56.283509  451036 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:52:56.286363  451036 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:52:56.286414  451036 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 14:52:56.286428  451036 cache.go:65] Caching tarball of preloaded images
	I1121 14:52:56.286440  451036 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:52:56.286512  451036 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 14:52:56.286521  451036 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:52:56.286657  451036 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/config.json ...
	I1121 14:52:56.305912  451036 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:52:56.305935  451036 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:52:56.305952  451036 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:52:56.305976  451036 start.go:360] acquireMachinesLock for pause-706190: {Name:mk7d0e7547f55706e743e0e645a87e32329e26b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:52:56.306044  451036 start.go:364] duration metric: took 40.337µs to acquireMachinesLock for "pause-706190"
	I1121 14:52:56.306067  451036 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:52:56.306077  451036 fix.go:54] fixHost starting: 
	I1121 14:52:56.306330  451036 cli_runner.go:164] Run: docker container inspect pause-706190 --format={{.State.Status}}
	I1121 14:52:56.326051  451036 fix.go:112] recreateIfNeeded on pause-706190: state=Running err=<nil>
	W1121 14:52:56.326097  451036 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:52:54.542810  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:52:54.905519  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:56582->192.168.76.2:8443: read: connection reset by peer
	I1121 14:52:54.905577  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:52:54.905640  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:52:54.938180  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:52:54.938201  435193 cri.go:89] found id: "14ea6f8fd4272c68cf4137c6d5ef40c99cf24d7a773c2dbf35175ede2a6ad591"
	I1121 14:52:54.938206  435193 cri.go:89] found id: ""
	I1121 14:52:54.938214  435193 logs.go:282] 2 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e 14ea6f8fd4272c68cf4137c6d5ef40c99cf24d7a773c2dbf35175ede2a6ad591]
	I1121 14:52:54.938271  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:54.942258  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:54.945992  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:52:54.946061  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:52:54.976641  435193 cri.go:89] found id: ""
	I1121 14:52:54.976664  435193 logs.go:282] 0 containers: []
	W1121 14:52:54.976673  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:52:54.976679  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:52:54.976739  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:52:55.021705  435193 cri.go:89] found id: ""
	I1121 14:52:55.021731  435193 logs.go:282] 0 containers: []
	W1121 14:52:55.021741  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:52:55.021748  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:52:55.021814  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:52:55.052878  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:52:55.052900  435193 cri.go:89] found id: ""
	I1121 14:52:55.052908  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:52:55.052970  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:55.057203  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:52:55.057278  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:52:55.086002  435193 cri.go:89] found id: ""
	I1121 14:52:55.086025  435193 logs.go:282] 0 containers: []
	W1121 14:52:55.086033  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:52:55.086039  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:52:55.086100  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:52:55.113706  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:52:55.113735  435193 cri.go:89] found id: "a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:52:55.113741  435193 cri.go:89] found id: ""
	I1121 14:52:55.113748  435193 logs.go:282] 2 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384]
	I1121 14:52:55.113806  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:55.117617  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:55.121179  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:52:55.121247  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:52:55.150078  435193 cri.go:89] found id: ""
	I1121 14:52:55.150145  435193 logs.go:282] 0 containers: []
	W1121 14:52:55.150169  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:52:55.150190  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:52:55.150270  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:52:55.178066  435193 cri.go:89] found id: ""
	I1121 14:52:55.178097  435193 logs.go:282] 0 containers: []
	W1121 14:52:55.178106  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:52:55.178121  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:52:55.178136  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:52:55.211356  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:52:55.211386  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:52:55.242691  435193 logs.go:123] Gathering logs for kube-controller-manager [a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384] ...
	I1121 14:52:55.242727  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:52:55.278465  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:52:55.278491  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:52:55.315131  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:52:55.315161  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:52:55.433791  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:52:55.433828  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:52:55.451881  435193 logs.go:123] Gathering logs for kube-apiserver [14ea6f8fd4272c68cf4137c6d5ef40c99cf24d7a773c2dbf35175ede2a6ad591] ...
	I1121 14:52:55.451909  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 14ea6f8fd4272c68cf4137c6d5ef40c99cf24d7a773c2dbf35175ede2a6ad591"
	I1121 14:52:55.490588  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:52:55.490622  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:52:55.551541  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:52:55.551576  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:52:55.623350  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:52:55.623387  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:52:55.707579  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:52:56.329293  451036 out.go:252] * Updating the running docker "pause-706190" container ...
	I1121 14:52:56.329328  451036 machine.go:94] provisionDockerMachine start ...
	I1121 14:52:56.329410  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:52:56.347461  451036 main.go:143] libmachine: Using SSH client type: native
	I1121 14:52:56.347790  451036 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1121 14:52:56.347805  451036 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:52:56.491981  451036 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-706190
	
	I1121 14:52:56.492006  451036 ubuntu.go:182] provisioning hostname "pause-706190"
	I1121 14:52:56.492076  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:52:56.510696  451036 main.go:143] libmachine: Using SSH client type: native
	I1121 14:52:56.511059  451036 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1121 14:52:56.511087  451036 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-706190 && echo "pause-706190" | sudo tee /etc/hostname
	I1121 14:52:56.665950  451036 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-706190
	
	I1121 14:52:56.666094  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:52:56.685575  451036 main.go:143] libmachine: Using SSH client type: native
	I1121 14:52:56.685903  451036 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1121 14:52:56.685925  451036 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-706190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-706190/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-706190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:52:56.832865  451036 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:52:56.832955  451036 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 14:52:56.832997  451036 ubuntu.go:190] setting up certificates
	I1121 14:52:56.833023  451036 provision.go:84] configureAuth start
	I1121 14:52:56.833101  451036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-706190
	I1121 14:52:56.851761  451036 provision.go:143] copyHostCerts
	I1121 14:52:56.851832  451036 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 14:52:56.851847  451036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 14:52:56.851926  451036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 14:52:56.852022  451036 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 14:52:56.852027  451036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 14:52:56.852052  451036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 14:52:56.852106  451036 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 14:52:56.852110  451036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 14:52:56.852132  451036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 14:52:56.852177  451036 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.pause-706190 san=[127.0.0.1 192.168.85.2 localhost minikube pause-706190]
	I1121 14:52:56.988146  451036 provision.go:177] copyRemoteCerts
	I1121 14:52:56.988228  451036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:52:56.988273  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:52:57.011415  451036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:52:57.112338  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:52:57.130964  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1121 14:52:57.149380  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:52:57.168555  451036 provision.go:87] duration metric: took 335.496183ms to configureAuth
	I1121 14:52:57.168642  451036 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:52:57.168877  451036 config.go:182] Loaded profile config "pause-706190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:52:57.168990  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:52:57.186886  451036 main.go:143] libmachine: Using SSH client type: native
	I1121 14:52:57.187196  451036 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1121 14:52:57.187216  451036 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:52:58.207904  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:52:58.208372  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:52:58.208439  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:52:58.208492  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:52:58.246593  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:52:58.246616  435193 cri.go:89] found id: ""
	I1121 14:52:58.246624  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:52:58.246678  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:58.250153  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:52:58.250238  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:52:58.276697  435193 cri.go:89] found id: ""
	I1121 14:52:58.276720  435193 logs.go:282] 0 containers: []
	W1121 14:52:58.276728  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:52:58.276735  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:52:58.276791  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:52:58.304879  435193 cri.go:89] found id: ""
	I1121 14:52:58.304901  435193 logs.go:282] 0 containers: []
	W1121 14:52:58.304911  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:52:58.304917  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:52:58.304974  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:52:58.329819  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:52:58.329841  435193 cri.go:89] found id: ""
	I1121 14:52:58.329850  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:52:58.329930  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:58.333591  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:52:58.333662  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:52:58.359694  435193 cri.go:89] found id: ""
	I1121 14:52:58.359718  435193 logs.go:282] 0 containers: []
	W1121 14:52:58.359727  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:52:58.359733  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:52:58.359796  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:52:58.386232  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:52:58.386253  435193 cri.go:89] found id: "a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:52:58.386258  435193 cri.go:89] found id: ""
	I1121 14:52:58.386265  435193 logs.go:282] 2 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384]
	I1121 14:52:58.386321  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:58.390076  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:52:58.393565  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:52:58.393640  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:52:58.420067  435193 cri.go:89] found id: ""
	I1121 14:52:58.420092  435193 logs.go:282] 0 containers: []
	W1121 14:52:58.420101  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:52:58.420107  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:52:58.420166  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:52:58.447563  435193 cri.go:89] found id: ""
	I1121 14:52:58.447590  435193 logs.go:282] 0 containers: []
	W1121 14:52:58.447599  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:52:58.447615  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:52:58.447630  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:52:58.565131  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:52:58.565170  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:52:58.629023  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:52:58.629058  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:52:58.694716  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:52:58.694759  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:52:58.726152  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:52:58.726182  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:52:58.742023  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:52:58.742053  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:52:58.811745  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:52:58.811776  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:52:58.811791  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:52:58.845213  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:52:58.845247  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:52:58.871072  435193 logs.go:123] Gathering logs for kube-controller-manager [a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384] ...
	I1121 14:52:58.871100  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:53:01.398720  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:01.399403  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:01.399455  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:01.399515  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:01.428361  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:01.428406  435193 cri.go:89] found id: ""
	I1121 14:53:01.428415  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:01.428478  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:01.432298  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:01.432448  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:01.460955  435193 cri.go:89] found id: ""
	I1121 14:53:01.460982  435193 logs.go:282] 0 containers: []
	W1121 14:53:01.460993  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:01.461000  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:01.461068  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:01.487688  435193 cri.go:89] found id: ""
	I1121 14:53:01.487757  435193 logs.go:282] 0 containers: []
	W1121 14:53:01.487780  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:01.487799  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:01.487887  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:01.515382  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:01.515449  435193 cri.go:89] found id: ""
	I1121 14:53:01.515471  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:01.515558  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:01.519532  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:01.519647  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:01.551715  435193 cri.go:89] found id: ""
	I1121 14:53:01.551783  435193 logs.go:282] 0 containers: []
	W1121 14:53:01.551805  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:01.551826  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:01.551918  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:01.579905  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:01.579972  435193 cri.go:89] found id: "a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:53:01.579990  435193 cri.go:89] found id: ""
	I1121 14:53:01.580012  435193 logs.go:282] 2 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384]
	I1121 14:53:01.580097  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:01.583973  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:01.587512  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:01.587592  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:01.618168  435193 cri.go:89] found id: ""
	I1121 14:53:01.618245  435193 logs.go:282] 0 containers: []
	W1121 14:53:01.618261  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:01.618268  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:01.618328  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:01.644068  435193 cri.go:89] found id: ""
	I1121 14:53:01.644092  435193 logs.go:282] 0 containers: []
	W1121 14:53:01.644101  435193 logs.go:284] No container was found matching "storage-provisioner"
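
Each "listing CRI containers" step above is one crictl query filtered by component name, with an empty result logged as "No container was found matching". A hedged sketch of the same enumeration as a loop (component list taken from the log; this is a reconstruction, not a minikube helper):

    #!/bin/bash
    # Enumerate control-plane containers the way the log does: one crictl query per component.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done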
	I1121 14:53:01.644114  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:01.644125  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:01.703328  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:01.703369  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:01.733878  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:01.733967  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:01.759140  435193 logs.go:123] Gathering logs for kube-controller-manager [a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384] ...
	I1121 14:53:01.759214  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a68b7c06b4a36ae5f5fe5163b0b19d4e7878253542183d4a507a839941fb8384"
	I1121 14:53:01.785218  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:01.785304  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:01.907393  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:01.907429  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:01.924839  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:01.924877  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:01.996481  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:01.996502  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:01.996516  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:02.036943  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:02.036978  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
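
The log-gathering pass above reduces to four shell commands, shown here together for manual triage (taken verbatim from the Run lines, with the container id left as a placeholder):

    sudo journalctl -u crio -n 400       # CRI-O runtime logs
    sudo journalctl -u kubelet -n 400    # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /usr/local/bin/crictl logs --tail 400 <container-id>   # id from crictl ps -a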
	I1121 14:53:02.569573  451036 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:53:02.569595  451036 machine.go:97] duration metric: took 6.240258187s to provisionDockerMachine
	I1121 14:53:02.569606  451036 start.go:293] postStartSetup for "pause-706190" (driver="docker")
	I1121 14:53:02.569617  451036 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:53:02.569687  451036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:53:02.569735  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:53:02.588916  451036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:53:02.696467  451036 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:53:02.699771  451036 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:53:02.699798  451036 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:53:02.699813  451036 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 14:53:02.699870  451036 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 14:53:02.699951  451036 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 14:53:02.700063  451036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:53:02.707412  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:53:02.725035  451036 start.go:296] duration metric: took 155.41291ms for postStartSetup
	I1121 14:53:02.725115  451036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:53:02.725156  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:53:02.741748  451036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:53:02.838133  451036 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:53:02.843244  451036 fix.go:56] duration metric: took 6.537158791s for fixHost
	I1121 14:53:02.843273  451036 start.go:83] releasing machines lock for "pause-706190", held for 6.537217475s
	I1121 14:53:02.843351  451036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-706190
	I1121 14:53:02.860298  451036 ssh_runner.go:195] Run: cat /version.json
	I1121 14:53:02.860355  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:53:02.860672  451036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:53:02.860740  451036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-706190
	I1121 14:53:02.886984  451036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:53:02.889216  451036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/pause-706190/id_rsa Username:docker}
	I1121 14:53:02.992272  451036 ssh_runner.go:195] Run: systemctl --version
	I1121 14:53:03.086781  451036 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:53:03.131808  451036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:53:03.136338  451036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:53:03.136445  451036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:53:03.144768  451036 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
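
The find invocation above renames any bridge or podman CNI config out of the way (suffix .mk_disabled) so the kindnet CNI can own the pod network; here none were present. The logged command has its shell quoting stripped; a restored, safely-quoted rendering (an interpretation, since the log collapses the escapes):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;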
	I1121 14:53:03.144801  451036 start.go:496] detecting cgroup driver to use...
	I1121 14:53:03.144835  451036 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:53:03.144886  451036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:53:03.161349  451036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:53:03.175044  451036 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:53:03.175134  451036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:53:03.191627  451036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:53:03.205556  451036 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:53:03.353521  451036 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:53:03.493693  451036 docker.go:234] disabling docker service ...
	I1121 14:53:03.493855  451036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:53:03.509649  451036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:53:03.524224  451036 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:53:03.662486  451036 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:53:03.802937  451036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:53:03.816342  451036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:53:03.832467  451036 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:53:03.832579  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.842840  451036 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 14:53:03.842911  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.852897  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.862619  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.872550  451036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:53:03.886511  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.896662  451036 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.906291  451036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:53:03.916629  451036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:53:03.925730  451036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:53:03.933711  451036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:53:04.069410  451036 ssh_runner.go:195] Run: sudo systemctl restart crio
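
Collected from the Run lines above, the CRI-O reconfiguration is a short sed pipeline over one drop-in config followed by a restart; a condensed sketch of the same edits:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                         # drop any old setting
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"  # re-add after cgroup_manager
    sudo systemctl daemon-reload && sudo systemctl restart crio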
	I1121 14:53:04.303469  451036 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:53:04.303552  451036 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:53:04.307510  451036 start.go:564] Will wait 60s for crictl version
	I1121 14:53:04.307573  451036 ssh_runner.go:195] Run: which crictl
	I1121 14:53:04.311217  451036 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:53:04.334888  451036 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:53:04.334983  451036 ssh_runner.go:195] Run: crio --version
	I1121 14:53:04.366743  451036 ssh_runner.go:195] Run: crio --version
	I1121 14:53:04.399084  451036 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:53:04.402039  451036 cli_runner.go:164] Run: docker network inspect pause-706190 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:53:04.418007  451036 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:53:04.421985  451036 kubeadm.go:884] updating cluster {Name:pause-706190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:53:04.422144  451036 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:53:04.422202  451036 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:53:04.454058  451036 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:53:04.454080  451036 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:53:04.454146  451036 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:53:04.479696  451036 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:53:04.479772  451036 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:53:04.479802  451036 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:53:04.479933  451036 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-706190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-706190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:53:04.480034  451036 ssh_runner.go:195] Run: crio config
	I1121 14:53:04.547478  451036 cni.go:84] Creating CNI manager for ""
	I1121 14:53:04.547547  451036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:53:04.547584  451036 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:53:04.547634  451036 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-706190 NodeName:pause-706190 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:53:04.547799  451036 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-706190"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
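
The rendered manifest (2209 bytes once written, per the scp below) carries four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. It can be sanity-checked before use; a sketch, assuming this kubeadm build ships the validate subcommand (recent releases do; otherwise a --dry-run init gives similar coverage):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new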
	
	I1121 14:53:04.548108  451036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:53:04.557562  451036 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:53:04.557715  451036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:53:04.565568  451036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1121 14:53:04.578730  451036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:53:04.591810  451036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1121 14:53:04.606286  451036 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:53:04.611154  451036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:53:04.792016  451036 ssh_runner.go:195] Run: sudo systemctl start kubelet
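
The two scp-from-memory steps above install the kubelet systemd unit plus the kubeadm drop-in that carries the ExecStart shown earlier; the daemon-reload then picks both up. A sketch of the drop-in's layout (contents reconstructed from the flags in the log):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    # The empty ExecStart= clears the base unit's command before overriding it.
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
      --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= \
      --hostname-override=pause-706190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2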
	I1121 14:53:04.808585  451036 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190 for IP: 192.168.85.2
	I1121 14:53:04.808624  451036 certs.go:195] generating shared ca certs ...
	I1121 14:53:04.808642  451036 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:53:04.808821  451036 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 14:53:04.808889  451036 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 14:53:04.808912  451036 certs.go:257] generating profile certs ...
	I1121 14:53:04.809033  451036 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.key
	I1121 14:53:04.809124  451036 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/apiserver.key.65068416
	I1121 14:53:04.809192  451036 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/proxy-client.key
	I1121 14:53:04.809323  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 14:53:04.809380  451036 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 14:53:04.809398  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:53:04.809448  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:53:04.809494  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:53:04.809529  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 14:53:04.809593  451036 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:53:04.810246  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:53:04.838311  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:53:04.861904  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:53:04.884071  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:53:04.905039  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 14:53:04.929395  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:53:04.950168  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:53:04.970400  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:53:04.991020  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:53:05.014739  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 14:53:05.038793  451036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 14:53:05.069504  451036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:53:05.085711  451036 ssh_runner.go:195] Run: openssl version
	I1121 14:53:05.093659  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:53:05.106241  451036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:53:05.112009  451036 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:53:05.112106  451036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:53:05.159990  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:53:05.172168  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 14:53:05.185530  451036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 14:53:05.190147  451036 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 14:53:05.190243  451036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 14:53:05.236823  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 14:53:05.246176  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 14:53:05.255628  451036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 14:53:05.259834  451036 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 14:53:05.259927  451036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 14:53:05.306640  451036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
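
The link names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: openssl x509 -hash prints the hash that TLS clients use to look up a CA under /etc/ssl/certs, and the symlink is that hash plus a .0 suffix. A minimal sketch of producing one by hand:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")   # e.g. b5213941
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"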
	I1121 14:53:05.320555  451036 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:53:05.325523  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:53:05.372212  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:53:05.414483  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:53:05.458511  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:53:05.500637  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:53:05.542403  451036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
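
Each probe above uses -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 24 hours (86400 seconds); the passing runs are why no cert regeneration happens here. A loop sketch over the same files:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
        && echo "$c: valid for >24h" || echo "$c: expiring soon or unreadable"
    done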
	I1121 14:53:05.594088  451036 kubeadm.go:401] StartCluster: {Name:pause-706190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:53:05.594205  451036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:53:05.594270  451036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:53:05.624133  451036 cri.go:89] found id: "040f84d32cdc7b868103d9e8b5e9e17971b6d790e17758c33a035160a39e7d02"
	I1121 14:53:05.624153  451036 cri.go:89] found id: "45d1eb07971cca34df63cc22e950e71d40b97c9098663f5e60130d5a971a5bdc"
	I1121 14:53:05.624158  451036 cri.go:89] found id: "8b6e9299e5d66efba1babf3908bb853e3ef2453315bc7a675c28ebfadd857a0b"
	I1121 14:53:05.624161  451036 cri.go:89] found id: "c933d49a5b407943549354e3a9e5fbb544091961370b604289633563c7439472"
	I1121 14:53:05.624164  451036 cri.go:89] found id: "6812ac6759a64275994e5d4179d3b1c59a354178a5c457f989b66dddfd9abce0"
	I1121 14:53:05.624167  451036 cri.go:89] found id: "197f8208783cc8b8d66bcaabe4dafe985f92bc2bb1c5c712bf1bd3332e0271f2"
	I1121 14:53:05.624171  451036 cri.go:89] found id: "850ff406c6df8f8893b6ab5c9796026832713346c5d66cfb49b7adfbe435e36e"
	I1121 14:53:05.624174  451036 cri.go:89] found id: ""
	I1121 14:53:05.624227  451036 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:53:05.635247  451036 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:53:05Z" level=error msg="open /run/runc: no such file or directory"
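
The runc failure here is benign: /run/runc is runc's default state root, and "no such file or directory" just means no containers are registered there, so the unpause check falls through to a normal restart. The same probe with the root spelled out (default shown; a CRI-O build may use a different root):

    sudo runc --root /run/runc list -f json 2>/dev/null \
      || echo "no runc state dir; nothing is paused"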
	I1121 14:53:05.635329  451036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:53:05.643802  451036 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:53:05.643822  451036 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:53:05.643874  451036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:53:05.651557  451036 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:53:05.652185  451036 kubeconfig.go:125] found "pause-706190" server: "https://192.168.85.2:8443"
	I1121 14:53:05.653094  451036 kapi.go:59] client config for pause-706190: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.crt", KeyFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.key", CAFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21278a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1121 14:53:05.653588  451036 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1121 14:53:05.653607  451036 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1121 14:53:05.653613  451036 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1121 14:53:05.653618  451036 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1121 14:53:05.653630  451036 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1121 14:53:05.653924  451036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:53:05.661764  451036 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:53:05.661836  451036 kubeadm.go:602] duration metric: took 18.00741ms to restartPrimaryControlPlane
	I1121 14:53:05.661853  451036 kubeadm.go:403] duration metric: took 67.776017ms to StartCluster
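
restartPrimaryControlPlane decides between reuse and reconfigure by diffing the deployed kubeadm.yaml against the freshly rendered .new copy; an empty diff, as here, means the running control plane already matches the desired config. A sketch of that gate:

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "config unchanged: skip kubeadm reconfiguration"
    else
      echo "config drifted: re-run kubeadm phases"
    fi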
	I1121 14:53:05.661870  451036 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:53:05.661935  451036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:53:05.662806  451036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:53:05.663031  451036 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:53:05.663384  451036 config.go:182] Loaded profile config "pause-706190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:53:05.663447  451036 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:53:05.669390  451036 out.go:179] * Verifying Kubernetes components...
	I1121 14:53:05.669398  451036 out.go:179] * Enabled addons: 
	I1121 14:53:05.672142  451036 addons.go:530] duration metric: took 8.691492ms for enable addons: enabled=[]
	I1121 14:53:05.672175  451036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:53:05.798866  451036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:53:05.812140  451036 node_ready.go:35] waiting up to 6m0s for node "pause-706190" to be "Ready" ...
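
The six-minute wait amounts to polling the node's Ready condition until it reports True; the connection-refused retry logged shortly below is this loop hitting the apiserver before it is back up. A hand-rolled equivalent (the kubeconfig path is an assumption; minikube uses its own client config internally):

    # Poll until the Ready condition of the node reports "True".
    until [ "$(kubectl --kubeconfig "$HOME/.kube/config" get node pause-706190 \
          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)" = "True" ]; do
      sleep 2
    done
    echo "node pause-706190 is Ready"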
	I1121 14:53:04.600492  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:04.600923  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:04.600966  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:04.601024  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:04.632630  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:04.632651  435193 cri.go:89] found id: ""
	I1121 14:53:04.632659  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:04.632718  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:04.636298  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:04.636369  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:04.695503  435193 cri.go:89] found id: ""
	I1121 14:53:04.695524  435193 logs.go:282] 0 containers: []
	W1121 14:53:04.695532  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:04.695538  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:04.695598  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:04.732507  435193 cri.go:89] found id: ""
	I1121 14:53:04.732527  435193 logs.go:282] 0 containers: []
	W1121 14:53:04.732536  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:04.732542  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:04.732599  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:04.760115  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:04.760133  435193 cri.go:89] found id: ""
	I1121 14:53:04.760141  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:04.760206  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:04.764288  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:04.764356  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:04.791875  435193 cri.go:89] found id: ""
	I1121 14:53:04.791962  435193 logs.go:282] 0 containers: []
	W1121 14:53:04.791992  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:04.792015  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:04.792104  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:04.828461  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:04.828480  435193 cri.go:89] found id: ""
	I1121 14:53:04.828487  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:04.828543  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:04.833440  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:04.833509  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:04.867319  435193 cri.go:89] found id: ""
	I1121 14:53:04.867346  435193 logs.go:282] 0 containers: []
	W1121 14:53:04.867355  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:04.867362  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:04.867422  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:04.899290  435193 cri.go:89] found id: ""
	I1121 14:53:04.899396  435193 logs.go:282] 0 containers: []
	W1121 14:53:04.899421  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:04.899442  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:04.899475  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:04.919229  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:04.919308  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:05.014423  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:05.014492  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:05.014525  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:05.061532  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:05.061606  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:05.152663  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:05.152744  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:05.193683  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:05.193710  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:05.280177  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:05.280214  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:05.335094  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:05.335127  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:07.966434  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:07.966905  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:07.966955  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:07.967030  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:07.994721  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:07.994745  435193 cri.go:89] found id: ""
	I1121 14:53:07.994753  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:07.994811  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:07.998572  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:07.998644  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:08.031363  435193 cri.go:89] found id: ""
	I1121 14:53:08.031386  435193 logs.go:282] 0 containers: []
	W1121 14:53:08.031395  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:08.031403  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:08.031472  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:08.058670  435193 cri.go:89] found id: ""
	I1121 14:53:08.058695  435193 logs.go:282] 0 containers: []
	W1121 14:53:08.058705  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:08.058712  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:08.058772  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:08.088766  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:08.088789  435193 cri.go:89] found id: ""
	I1121 14:53:08.088797  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:08.088864  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:08.092703  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:08.092830  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:08.118572  435193 cri.go:89] found id: ""
	I1121 14:53:08.118597  435193 logs.go:282] 0 containers: []
	W1121 14:53:08.118607  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:08.118614  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:08.118673  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1121 14:53:07.813403  451036 node_ready.go:55] error getting node "pause-706190" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/pause-706190": dial tcp 192.168.85.2:8443: connect: connection refused
	I1121 14:53:08.146035  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:08.146059  435193 cri.go:89] found id: ""
	I1121 14:53:08.146068  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:08.146143  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:08.149893  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:08.149989  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:08.175887  435193 cri.go:89] found id: ""
	I1121 14:53:08.175912  435193 logs.go:282] 0 containers: []
	W1121 14:53:08.175920  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:08.175928  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:08.176004  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:08.232195  435193 cri.go:89] found id: ""
	I1121 14:53:08.232217  435193 logs.go:282] 0 containers: []
	W1121 14:53:08.232226  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:08.232251  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:08.232270  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:08.318216  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:08.318253  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:08.375270  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:08.375346  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:08.541483  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:08.541571  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:08.567395  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:08.567484  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:08.688557  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:08.688640  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:08.688670  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:08.750593  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:08.750675  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:08.853040  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:08.853122  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:11.396476  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:11.396857  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:11.396896  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:11.396949  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:11.445625  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:11.445692  435193 cri.go:89] found id: ""
	I1121 14:53:11.445713  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:11.445805  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:11.453071  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:11.453194  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:11.494810  435193 cri.go:89] found id: ""
	I1121 14:53:11.494884  435193 logs.go:282] 0 containers: []
	W1121 14:53:11.494912  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:11.494931  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:11.495036  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:11.540367  435193 cri.go:89] found id: ""
	I1121 14:53:11.540468  435193 logs.go:282] 0 containers: []
	W1121 14:53:11.540491  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:11.540513  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:11.540600  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:11.583922  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:11.583992  435193 cri.go:89] found id: ""
	I1121 14:53:11.584014  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:11.584104  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:11.588039  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:11.588175  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:11.640515  435193 cri.go:89] found id: ""
	I1121 14:53:11.640589  435193 logs.go:282] 0 containers: []
	W1121 14:53:11.640612  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:11.640642  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:11.640747  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:11.684219  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:11.684354  435193 cri.go:89] found id: ""
	I1121 14:53:11.684376  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:11.684494  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:11.688420  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:11.688550  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:11.740797  435193 cri.go:89] found id: ""
	I1121 14:53:11.740871  435193 logs.go:282] 0 containers: []
	W1121 14:53:11.740895  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:11.740913  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:11.741001  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:11.778725  435193 cri.go:89] found id: ""
	I1121 14:53:11.778810  435193 logs.go:282] 0 containers: []
	W1121 14:53:11.778839  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:11.778873  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:11.778902  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:11.831425  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:11.831502  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:11.933138  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:11.933227  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:11.987840  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:11.987866  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:12.135716  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:12.135798  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:12.170385  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:12.170411  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:12.276308  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:12.276392  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:12.276425  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:12.325150  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:12.325226  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
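
Each "Gathering logs for ..." pair above maps one log source to one shell command run on the node (journalctl for kubelet and CRI-O, a filtered dmesg, crictl/docker for container status), and a failing source such as the refused "describe nodes" call is logged as a warning rather than aborting the sweep. A minimal sketch of that fan-out, reusing the exact commands from the log; assumed structure, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied verbatim from the ssh_runner lines above.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// Mirror logs.go: record the failure and keep gathering.
			fmt.Printf("W gathering %s failed: %v\n", s.name, err)
			continue
		}
		fmt.Printf("I gathered %s (%d bytes)\n", s.name, len(out))
	}
}
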
	I1121 14:53:12.916493  451036 node_ready.go:49] node "pause-706190" is "Ready"
	I1121 14:53:12.916519  451036 node_ready.go:38] duration metric: took 7.104338332s for node "pause-706190" to be "Ready" ...
	I1121 14:53:12.916532  451036 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:53:12.916591  451036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:53:12.938782  451036 api_server.go:72] duration metric: took 7.275713505s to wait for apiserver process to appear ...
	I1121 14:53:12.938805  451036 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:53:12.938825  451036 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:53:13.012530  451036 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:53:13.012613  451036 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 14:53:13.438934  451036 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:53:13.447178  451036 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:53:13.447207  451036 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 14:53:13.939856  451036 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:53:13.948281  451036 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:53:13.949486  451036 api_server.go:141] control plane version: v1.34.1
	I1121 14:53:13.949514  451036 api_server.go:131] duration metric: took 1.010701403s to wait for apiserver health ...
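
The healthz exchange above (two 500 responses with failing poststarthooks, then 200 "ok" about a second later) is the usual apiserver warm-up pattern: /healthz aggregates named checks and returns 500 until every poststarthook has completed. A minimal polling sketch that treats connection errors and 500s as retryable, as api_server.go does; the InsecureSkipVerify shortcut is an assumption for brevity (a real client would trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout expires.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// A 500 body lists each [+] passing / [-] failing check, as above.
			fmt.Printf("status: %s returned error %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %v", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
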
	I1121 14:53:13.949523  451036 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:53:13.953094  451036 system_pods.go:59] 7 kube-system pods found
	I1121 14:53:13.953137  451036 system_pods.go:61] "coredns-66bc5c9577-gv42v" [32e6ea19-296d-433e-ab3e-7e992350c3c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:53:13.953147  451036 system_pods.go:61] "etcd-pause-706190" [6c5b228e-a0df-44af-a192-d6e4c28b067d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:53:13.953153  451036 system_pods.go:61] "kindnet-w45qn" [82b593e6-c11c-40d5-b942-033d29c7abd1] Running
	I1121 14:53:13.953160  451036 system_pods.go:61] "kube-apiserver-pause-706190" [3c8bd477-767c-40be-8fa7-b5edcf70b139] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:53:13.953172  451036 system_pods.go:61] "kube-controller-manager-pause-706190" [df7e94a8-f416-4abf-94d8-da9d0ff7efd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:53:13.953179  451036 system_pods.go:61] "kube-proxy-hzbpc" [1276e562-5617-4a13-af4d-f386a07e45d1] Running
	I1121 14:53:13.953186  451036 system_pods.go:61] "kube-scheduler-pause-706190" [ae855e82-1749-4071-b918-3df98ae0229d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:53:13.953200  451036 system_pods.go:74] duration metric: took 3.67011ms to wait for pod list to return data ...
	I1121 14:53:13.953210  451036 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:53:13.956170  451036 default_sa.go:45] found service account: "default"
	I1121 14:53:13.956196  451036 default_sa.go:55] duration metric: took 2.977548ms for default service account to be created ...
	I1121 14:53:13.956205  451036 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:53:13.959140  451036 system_pods.go:86] 7 kube-system pods found
	I1121 14:53:13.959182  451036 system_pods.go:89] "coredns-66bc5c9577-gv42v" [32e6ea19-296d-433e-ab3e-7e992350c3c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:53:13.959191  451036 system_pods.go:89] "etcd-pause-706190" [6c5b228e-a0df-44af-a192-d6e4c28b067d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:53:13.959196  451036 system_pods.go:89] "kindnet-w45qn" [82b593e6-c11c-40d5-b942-033d29c7abd1] Running
	I1121 14:53:13.959203  451036 system_pods.go:89] "kube-apiserver-pause-706190" [3c8bd477-767c-40be-8fa7-b5edcf70b139] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:53:13.959214  451036 system_pods.go:89] "kube-controller-manager-pause-706190" [df7e94a8-f416-4abf-94d8-da9d0ff7efd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:53:13.959221  451036 system_pods.go:89] "kube-proxy-hzbpc" [1276e562-5617-4a13-af4d-f386a07e45d1] Running
	I1121 14:53:13.959228  451036 system_pods.go:89] "kube-scheduler-pause-706190" [ae855e82-1749-4071-b918-3df98ae0229d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:53:13.959244  451036 system_pods.go:126] duration metric: took 3.032244ms to wait for k8s-apps to be running ...
	I1121 14:53:13.959253  451036 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:53:13.959313  451036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:53:13.972839  451036 system_svc.go:56] duration metric: took 13.574935ms WaitForService to wait for kubelet
	I1121 14:53:13.972868  451036 kubeadm.go:587] duration metric: took 8.30980582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:53:13.972888  451036 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:53:13.975685  451036 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:53:13.975720  451036 node_conditions.go:123] node cpu capacity is 2
	I1121 14:53:13.975735  451036 node_conditions.go:105] duration metric: took 2.842088ms to run NodePressure ...
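
The node_conditions lines read the node's reported capacity (203034800Ki of ephemeral storage and 2 CPUs here) as part of verifying there is no node pressure. A minimal client-go sketch that lists nodes and prints the same capacity fields, assuming a standard kubeconfig; not minikube's code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, eph.String(), cpu.String())
	}
}
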
	I1121 14:53:13.975749  451036 start.go:242] waiting for startup goroutines ...
	I1121 14:53:13.975757  451036 start.go:247] waiting for cluster config update ...
	I1121 14:53:13.975768  451036 start.go:256] writing updated cluster config ...
	I1121 14:53:13.976107  451036 ssh_runner.go:195] Run: rm -f paused
	I1121 14:53:13.979743  451036 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:53:13.980515  451036 kapi.go:59] client config for pause-706190: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.crt", KeyFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.key", CAFile:"/home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21278a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
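
The kapi.go dump above is a client-go rest.Config built from the profile's client certificate/key and the minikube CA. A minimal sketch of constructing the equivalent config and client, with the paths taken from the dump itself; assumes the standard client-go packages, not minikube's code:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21847-289204/.minikube/profiles/pause-706190/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirrors api_server.go:141 "control plane version: v1.34.1".
	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
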
	I1121 14:53:13.983697  451036 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gv42v" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 14:53:15.989586  451036 pod_ready.go:104] pod "coredns-66bc5c9577-gv42v" is not "Ready", error: <nil>
	I1121 14:53:14.909821  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:14.910238  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:14.910283  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:14.910340  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:14.951414  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:14.951433  435193 cri.go:89] found id: ""
	I1121 14:53:14.951440  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:14.951495  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:14.955744  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:14.955812  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:14.993314  435193 cri.go:89] found id: ""
	I1121 14:53:14.993337  435193 logs.go:282] 0 containers: []
	W1121 14:53:14.993345  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:14.993352  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:14.993413  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:15.080525  435193 cri.go:89] found id: ""
	I1121 14:53:15.080549  435193 logs.go:282] 0 containers: []
	W1121 14:53:15.080557  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:15.080564  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:15.080627  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:15.112800  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:15.112824  435193 cri.go:89] found id: ""
	I1121 14:53:15.112833  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:15.112897  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:15.117349  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:15.117427  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:15.145759  435193 cri.go:89] found id: ""
	I1121 14:53:15.145788  435193 logs.go:282] 0 containers: []
	W1121 14:53:15.145797  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:15.145804  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:15.145864  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:15.177343  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:15.177368  435193 cri.go:89] found id: ""
	I1121 14:53:15.177376  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:15.177433  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:15.181530  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:15.181621  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:15.212067  435193 cri.go:89] found id: ""
	I1121 14:53:15.212089  435193 logs.go:282] 0 containers: []
	W1121 14:53:15.212097  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:15.212104  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:15.212166  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:15.246935  435193 cri.go:89] found id: ""
	I1121 14:53:15.246957  435193 logs.go:282] 0 containers: []
	W1121 14:53:15.246966  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:15.246977  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:15.246988  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:15.314616  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:15.314635  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:15.314653  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:15.360797  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:15.360870  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:15.427451  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:15.427489  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:15.464695  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:15.464729  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:15.541633  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:15.541757  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:15.582082  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:15.582194  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:15.709452  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:15.709531  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1121 14:53:17.989789  451036 pod_ready.go:104] pod "coredns-66bc5c9577-gv42v" is not "Ready", error: <nil>
	W1121 14:53:20.489484  451036 pod_ready.go:104] pod "coredns-66bc5c9577-gv42v" is not "Ready", error: <nil>
	I1121 14:53:20.988876  451036 pod_ready.go:94] pod "coredns-66bc5c9577-gv42v" is "Ready"
	I1121 14:53:20.988902  451036 pod_ready.go:86] duration metric: took 7.005176807s for pod "coredns-66bc5c9577-gv42v" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:20.991817  451036 pod_ready.go:83] waiting for pod "etcd-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:18.229218  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:18.229649  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:18.229693  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:18.229750  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:18.260502  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:18.260529  435193 cri.go:89] found id: ""
	I1121 14:53:18.260537  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:18.260604  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:18.264549  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:18.264627  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:18.290291  435193 cri.go:89] found id: ""
	I1121 14:53:18.290316  435193 logs.go:282] 0 containers: []
	W1121 14:53:18.290325  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:18.290332  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:18.290402  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:18.318155  435193 cri.go:89] found id: ""
	I1121 14:53:18.318178  435193 logs.go:282] 0 containers: []
	W1121 14:53:18.318187  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:18.318193  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:18.318262  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:18.345309  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:18.345331  435193 cri.go:89] found id: ""
	I1121 14:53:18.345340  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:18.345415  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:18.349188  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:18.349281  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:18.373964  435193 cri.go:89] found id: ""
	I1121 14:53:18.373988  435193 logs.go:282] 0 containers: []
	W1121 14:53:18.373996  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:18.374003  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:18.374060  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:18.400254  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:18.400277  435193 cri.go:89] found id: ""
	I1121 14:53:18.400286  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:18.400344  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:18.403994  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:18.404068  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:18.429936  435193 cri.go:89] found id: ""
	I1121 14:53:18.429961  435193 logs.go:282] 0 containers: []
	W1121 14:53:18.429970  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:18.429976  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:18.430038  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:18.456664  435193 cri.go:89] found id: ""
	I1121 14:53:18.456685  435193 logs.go:282] 0 containers: []
	W1121 14:53:18.456694  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:18.456702  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:18.456720  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:18.519237  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:18.519272  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:18.545007  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:18.545035  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:18.610627  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:18.610664  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:18.646322  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:18.646399  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:18.761407  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:18.761445  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:18.777947  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:18.777978  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:18.850910  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:18.850929  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:18.850943  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:21.384840  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:21.385277  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:21.385332  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:21.385388  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:21.414187  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:21.414211  435193 cri.go:89] found id: ""
	I1121 14:53:21.414219  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:21.414276  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:21.418222  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:21.418291  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:21.448017  435193 cri.go:89] found id: ""
	I1121 14:53:21.448042  435193 logs.go:282] 0 containers: []
	W1121 14:53:21.448051  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:21.448057  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:21.448121  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:21.476317  435193 cri.go:89] found id: ""
	I1121 14:53:21.476341  435193 logs.go:282] 0 containers: []
	W1121 14:53:21.476359  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:21.476365  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:21.476469  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:21.512058  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:21.512079  435193 cri.go:89] found id: ""
	I1121 14:53:21.512087  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:21.512142  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:21.516019  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:21.516090  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:21.546934  435193 cri.go:89] found id: ""
	I1121 14:53:21.546958  435193 logs.go:282] 0 containers: []
	W1121 14:53:21.546967  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:21.546974  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:21.547034  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:21.582288  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:21.582310  435193 cri.go:89] found id: ""
	I1121 14:53:21.582318  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:21.582375  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:21.587252  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:21.587345  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:21.619674  435193 cri.go:89] found id: ""
	I1121 14:53:21.619703  435193 logs.go:282] 0 containers: []
	W1121 14:53:21.619712  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:21.619719  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:21.619800  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:21.647576  435193 cri.go:89] found id: ""
	I1121 14:53:21.647599  435193 logs.go:282] 0 containers: []
	W1121 14:53:21.647607  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:21.647616  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:21.647655  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:21.766527  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:21.766563  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:21.783310  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:21.783392  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:21.851781  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:21.851800  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:21.852790  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:21.886330  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:21.886359  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:21.954691  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:21.954730  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:21.984024  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:21.984052  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:22.061539  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:22.061578  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1121 14:53:22.998176  451036 pod_ready.go:104] pod "etcd-pause-706190" is not "Ready", error: <nil>
	W1121 14:53:24.998262  451036 pod_ready.go:104] pod "etcd-pause-706190" is not "Ready", error: <nil>
	I1121 14:53:26.997175  451036 pod_ready.go:94] pod "etcd-pause-706190" is "Ready"
	I1121 14:53:26.997205  451036 pod_ready.go:86] duration metric: took 6.005361284s for pod "etcd-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:26.999660  451036 pod_ready.go:83] waiting for pod "kube-apiserver-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.007255  451036 pod_ready.go:94] pod "kube-apiserver-pause-706190" is "Ready"
	I1121 14:53:27.007302  451036 pod_ready.go:86] duration metric: took 7.613939ms for pod "kube-apiserver-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.010541  451036 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.016286  451036 pod_ready.go:94] pod "kube-controller-manager-pause-706190" is "Ready"
	I1121 14:53:27.016370  451036 pod_ready.go:86] duration metric: took 5.796546ms for pod "kube-controller-manager-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.019157  451036 pod_ready.go:83] waiting for pod "kube-proxy-hzbpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.195688  451036 pod_ready.go:94] pod "kube-proxy-hzbpc" is "Ready"
	I1121 14:53:27.195717  451036 pod_ready.go:86] duration metric: took 176.533051ms for pod "kube-proxy-hzbpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.396179  451036 pod_ready.go:83] waiting for pod "kube-scheduler-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.797434  451036 pod_ready.go:94] pod "kube-scheduler-pause-706190" is "Ready"
	I1121 14:53:27.797465  451036 pod_ready.go:86] duration metric: took 401.254895ms for pod "kube-scheduler-pause-706190" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:53:27.797479  451036 pod_ready.go:40] duration metric: took 13.817700305s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
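
The pod_ready.go waits above poll each labeled kube-system pod until it reports Ready (coredns took about 7s, etcd about 6s, the rest were already up). The predicate behind those `pod ... is "Ready"` lines reduces to the pod's PodReady condition; a minimal sketch of that check, assuming the k8s.io/api package:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isReady reports whether the pod's Ready condition is True, the
// check implied by the pod_ready.go lines above.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}
	fmt.Println(isReady(pod)) // true
}
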
	I1121 14:53:27.878467  451036 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 14:53:27.881608  451036 out.go:179] * Done! kubectl is now configured to use "pause-706190" cluster and "default" namespace by default
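
The closing "minor skew: 1" note compares kubectl's minor version (1.33) with the cluster's (1.34); kubectl is supported within one minor version of the apiserver, so this is informational. A minimal sketch of that comparison with a hypothetical helper, not minikube's code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew parses "major.minor.patch" strings and returns the
// absolute difference of the minor components.
func minorSkew(kubectl, cluster string) int {
	minor := func(v string) int {
		m, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return m
	}
	d := minor(kubectl) - minor(cluster)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.33.2", "1.34.1")) // 1
}
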
	I1121 14:53:24.603458  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:24.603954  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:24.604017  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:24.604097  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:24.632094  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:24.632115  435193 cri.go:89] found id: ""
	I1121 14:53:24.632123  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:24.632186  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:24.635983  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:24.636060  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:24.664231  435193 cri.go:89] found id: ""
	I1121 14:53:24.664257  435193 logs.go:282] 0 containers: []
	W1121 14:53:24.664265  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:24.664272  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:24.664334  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:24.692142  435193 cri.go:89] found id: ""
	I1121 14:53:24.692166  435193 logs.go:282] 0 containers: []
	W1121 14:53:24.692177  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:24.692184  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:24.692252  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:24.721002  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:24.721025  435193 cri.go:89] found id: ""
	I1121 14:53:24.721034  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:24.721093  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:24.724974  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:24.725051  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:24.751805  435193 cri.go:89] found id: ""
	I1121 14:53:24.751831  435193 logs.go:282] 0 containers: []
	W1121 14:53:24.751840  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:24.751847  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:24.751937  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:24.778847  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:24.778870  435193 cri.go:89] found id: ""
	I1121 14:53:24.778878  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:24.778959  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:24.782839  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:24.782966  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:24.808571  435193 cri.go:89] found id: ""
	I1121 14:53:24.808651  435193 logs.go:282] 0 containers: []
	W1121 14:53:24.808680  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:24.808709  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:24.808775  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:24.836097  435193 cri.go:89] found id: ""
	I1121 14:53:24.836122  435193 logs.go:282] 0 containers: []
	W1121 14:53:24.836130  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:24.836140  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:24.836152  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:24.905301  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:24.905336  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:24.932299  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:24.932377  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:24.992486  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:24.992524  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:25.029458  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:25.029493  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:25.162229  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:25.162278  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:25.179973  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:25.180005  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:25.253766  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:25.253786  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:25.253802  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:27.789303  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:27.789769  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:27.789832  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:27.789902  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:27.832225  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:27.832248  435193 cri.go:89] found id: ""
	I1121 14:53:27.832257  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:27.832317  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:27.836793  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:27.836871  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:27.869721  435193 cri.go:89] found id: ""
	I1121 14:53:27.869747  435193 logs.go:282] 0 containers: []
	W1121 14:53:27.869756  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:27.869763  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:27.869821  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:27.946258  435193 cri.go:89] found id: ""
	I1121 14:53:27.946286  435193 logs.go:282] 0 containers: []
	W1121 14:53:27.946295  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:27.946302  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:27.946375  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:27.978748  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:27.978771  435193 cri.go:89] found id: ""
	I1121 14:53:27.978779  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:27.978832  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:27.985644  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:27.985714  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:28.027180  435193 cri.go:89] found id: ""
	I1121 14:53:28.027208  435193 logs.go:282] 0 containers: []
	W1121 14:53:28.027217  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:28.027224  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:28.027284  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:28.070399  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:28.070418  435193 cri.go:89] found id: ""
	I1121 14:53:28.070426  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:28.070483  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:28.074849  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:28.074918  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:28.105114  435193 cri.go:89] found id: ""
	I1121 14:53:28.105136  435193 logs.go:282] 0 containers: []
	W1121 14:53:28.105144  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:28.105151  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:28.105226  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:28.136494  435193 cri.go:89] found id: ""
	I1121 14:53:28.136516  435193 logs.go:282] 0 containers: []
	W1121 14:53:28.136524  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:28.136533  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:28.136545  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:53:28.194686  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:28.194752  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:28.346902  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:28.346978  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:28.373448  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:28.373534  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:28.471509  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:28.471573  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:28.471601  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:28.513568  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:28.513603  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:28.579891  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:28.579923  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:28.612225  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:28.612250  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:31.185450  435193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:53:31.185978  435193 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:53:31.186031  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:53:31.186090  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:53:31.231357  435193 cri.go:89] found id: "6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:31.231377  435193 cri.go:89] found id: ""
	I1121 14:53:31.231385  435193 logs.go:282] 1 containers: [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e]
	I1121 14:53:31.231448  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:31.237462  435193 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:53:31.237532  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:53:31.274328  435193 cri.go:89] found id: ""
	I1121 14:53:31.274344  435193 logs.go:282] 0 containers: []
	W1121 14:53:31.274352  435193 logs.go:284] No container was found matching "etcd"
	I1121 14:53:31.274358  435193 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:53:31.274415  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:53:31.302765  435193 cri.go:89] found id: ""
	I1121 14:53:31.302792  435193 logs.go:282] 0 containers: []
	W1121 14:53:31.302801  435193 logs.go:284] No container was found matching "coredns"
	I1121 14:53:31.302808  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:53:31.302870  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:53:31.339421  435193 cri.go:89] found id: "4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:31.339447  435193 cri.go:89] found id: ""
	I1121 14:53:31.339455  435193 logs.go:282] 1 containers: [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec]
	I1121 14:53:31.339514  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:31.344322  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:53:31.344413  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:53:31.383428  435193 cri.go:89] found id: ""
	I1121 14:53:31.383456  435193 logs.go:282] 0 containers: []
	W1121 14:53:31.383465  435193 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:53:31.383472  435193 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:53:31.383536  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:53:31.422936  435193 cri.go:89] found id: "743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:31.422962  435193 cri.go:89] found id: ""
	I1121 14:53:31.422970  435193 logs.go:282] 1 containers: [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49]
	I1121 14:53:31.423041  435193 ssh_runner.go:195] Run: which crictl
	I1121 14:53:31.428289  435193 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:53:31.428356  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:53:31.462944  435193 cri.go:89] found id: ""
	I1121 14:53:31.462970  435193 logs.go:282] 0 containers: []
	W1121 14:53:31.462980  435193 logs.go:284] No container was found matching "kindnet"
	I1121 14:53:31.462987  435193 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:53:31.463050  435193 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:53:31.497785  435193 cri.go:89] found id: ""
	I1121 14:53:31.497811  435193 logs.go:282] 0 containers: []
	W1121 14:53:31.497820  435193 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:53:31.497829  435193 logs.go:123] Gathering logs for kubelet ...
	I1121 14:53:31.497872  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:53:31.637494  435193 logs.go:123] Gathering logs for dmesg ...
	I1121 14:53:31.637528  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:53:31.657630  435193 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:53:31.657709  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:53:31.763409  435193 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:53:31.763430  435193 logs.go:123] Gathering logs for kube-apiserver [6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e] ...
	I1121 14:53:31.763444  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a590be6f625f700a133ea66349740f1fad1afb6319b3f5fefcd9e010a5e6f5e"
	I1121 14:53:31.817575  435193 logs.go:123] Gathering logs for kube-scheduler [4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec] ...
	I1121 14:53:31.817651  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4430901aeb2e34726f1ddb57a3851828a846d6957699cbe1e5849afe457680ec"
	I1121 14:53:31.887825  435193 logs.go:123] Gathering logs for kube-controller-manager [743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49] ...
	I1121 14:53:31.887858  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 743305475b3274468183e6a979c9ed4bef41eb4203addfed282fe34ec2e99b49"
	I1121 14:53:31.923781  435193 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:53:31.923827  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:53:31.995566  435193 logs.go:123] Gathering logs for container status ...
	I1121 14:53:31.995605  435193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
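
The repeated "Checking apiserver healthz" / "stopped: ... connection refused" pair above is minikube polling the apiserver's /healthz endpoint until the control plane comes back up. A minimal Go sketch of such a probe, assuming the same endpoint from the log and a self-signed serving certificate; the 2-second timeout is an assumed value, not minikube's actual setting:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe the endpoint seen in the log above; the timeout is an assumption.
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver presents a self-signed cert, so verification is
            // skipped for this illustration only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            // While the apiserver is down this yields "connect: connection
            // refused", matching the "stopped:" lines above.
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }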
	
	
	==> CRI-O <==
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.58989127Z" level=info msg="Creating container: kube-system/kube-proxy-hzbpc/kube-proxy" id=8f796da3-b3b8-4aa0-98d2-bf6f4c47ad2a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.590173858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.600981628Z" level=info msg="Created container b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256: kube-system/kube-apiserver-pause-706190/kube-apiserver" id=956f70ff-a76b-4bb7-a244-e50032aa4625 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.605564218Z" level=info msg="Starting container: b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256" id=2b84813a-7777-40db-b19f-f43eafebdc68 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.615282578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.616126807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.626880118Z" level=info msg="Started container" PID=2379 containerID=b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256 description=kube-system/kube-apiserver-pause-706190/kube-apiserver id=2b84813a-7777-40db-b19f-f43eafebdc68 name=/runtime.v1.RuntimeService/StartContainer sandboxID=36f556e38e05b9edcb1cc64f5a0c1af7e311f4715802d21f952101daeffbfdac
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.684358903Z" level=info msg="Created container 8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab: kube-system/kindnet-w45qn/kindnet-cni" id=3a62500c-d87c-4fdd-86f7-422dca43faac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.693124184Z" level=info msg="Starting container: 8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab" id=fb88e49b-9b35-4bc3-a090-d74fae1b6f02 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:53:08 pause-706190 crio[2068]: time="2025-11-21T14:53:08.694969804Z" level=info msg="Started container" PID=2391 containerID=8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab description=kube-system/kindnet-w45qn/kindnet-cni id=fb88e49b-9b35-4bc3-a090-d74fae1b6f02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be13f76cefe1e3ba107c3adeae0bfb28b6e4dacdcd83e8cd7cac8fbf08008e53
	Nov 21 14:53:09 pause-706190 crio[2068]: time="2025-11-21T14:53:09.104251814Z" level=info msg="Created container 7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c: kube-system/kube-proxy-hzbpc/kube-proxy" id=8f796da3-b3b8-4aa0-98d2-bf6f4c47ad2a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:53:09 pause-706190 crio[2068]: time="2025-11-21T14:53:09.105117212Z" level=info msg="Starting container: 7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c" id=7d2c5e16-4f65-47a3-a8a4-12c707ff60ac name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:53:09 pause-706190 crio[2068]: time="2025-11-21T14:53:09.107640174Z" level=info msg="Started container" PID=2404 containerID=7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c description=kube-system/kube-proxy-hzbpc/kube-proxy id=7d2c5e16-4f65-47a3-a8a4-12c707ff60ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=29237e21d7eb16448f00207288f942e2be7278f2f821198bf3e415aa4f5e04cb
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.036972516Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.041087424Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.041122518Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.041144697Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.044192178Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.044239867Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.044264212Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.047347582Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.047382036Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.047405577Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.050389787Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:53:19 pause-706190 crio[2068]: time="2025-11-21T14:53:19.05042411Z" level=info msg="Updated default CNI network name to kindnet"
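
The CREATE → WRITE → RENAME sequence CRI-O reports above is the standard atomic-update pattern: the new config is written to 10-kindnet.conflist.temp and then renamed over 10-kindnet.conflist, so watchers never observe a half-written file. A minimal sketch of that pattern, assuming nothing about kindnet's internals; the demo writes into a temp directory rather than /etc/cni/net.d, and the JSON payload is illustrative only:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // writeAtomically writes data to name+".temp" and renames it into place.
    // Rename is atomic within a single filesystem, which is why CRI-O's CNI
    // monitor sees events on the .temp file followed by a RENAME rather than
    // a partially written config.
    func writeAtomically(dir, name string, data []byte) error {
        tmp := filepath.Join(dir, name+".temp")
        if err := os.WriteFile(tmp, data, 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, filepath.Join(dir, name))
    }

    func main() {
        dir, err := os.MkdirTemp("", "cni-demo") // stand-in for /etc/cni/net.d
        if err != nil {
            panic(err)
        }
        defer os.RemoveAll(dir)
        payload := []byte(`{"name":"kindnet","cniVersion":"0.3.1","plugins":[]}`) // illustrative
        if err := writeAtomically(dir, "10-kindnet.conflist", payload); err != nil {
            panic(err)
        }
        fmt.Println("wrote", filepath.Join(dir, "10-kindnet.conflist"))
    }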
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	7fd1af596bd96       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   25 seconds ago       Running             kube-proxy                1                   29237e21d7eb1       kube-proxy-hzbpc                       kube-system
	8fb4ee525b249       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   25 seconds ago       Running             kindnet-cni               1                   be13f76cefe1e       kindnet-w45qn                          kube-system
	b4215710d430f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   25 seconds ago       Running             kube-apiserver            1                   36f556e38e05b       kube-apiserver-pause-706190            kube-system
	bfb73c516ae0a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   25 seconds ago       Running             kube-controller-manager   1                   4e42d008df7b1       kube-controller-manager-pause-706190   kube-system
	fbdd6d6086f23       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   25 seconds ago       Running             kube-scheduler            1                   97fa3155b7e63       kube-scheduler-pause-706190            kube-system
	e9789f6445316       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   25 seconds ago       Running             etcd                      1                   e0d356b2681af       etcd-pause-706190                      kube-system
	0271386e341fa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Running             coredns                   1                   ff76054a7fb26       coredns-66bc5c9577-gv42v               kube-system
	040f84d32cdc7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   40 seconds ago       Exited              coredns                   0                   ff76054a7fb26       coredns-66bc5c9577-gv42v               kube-system
	45d1eb07971cc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   29237e21d7eb1       kube-proxy-hzbpc                       kube-system
	8b6e9299e5d66       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   be13f76cefe1e       kindnet-w45qn                          kube-system
	c933d49a5b407       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   97fa3155b7e63       kube-scheduler-pause-706190            kube-system
	6812ac6759a64       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   e0d356b2681af       etcd-pause-706190                      kube-system
	197f8208783cc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   4e42d008df7b1       kube-controller-manager-pause-706190   kube-system
	850ff406c6df8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   36f556e38e05b       kube-apiserver-pause-706190            kube-system
	
	
	==> coredns [0271386e341fa99a2b463a6333f9aca47fc2de7c5cb39e67ebb2fccf8ffa1a5e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55964 - 625 "HINFO IN 3085672885384044256.6973577108607144418. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033646716s
	
	
	==> coredns [040f84d32cdc7b868103d9e8b5e9e17971b6d790e17758c33a035160a39e7d02] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41994 - 36070 "HINFO IN 275753412209615345.3016881851810483464. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024853678s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-706190
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-706190
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=pause-706190
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_52_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-706190
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:53:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:52:52 +0000   Fri, 21 Nov 2025 14:51:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:52:52 +0000   Fri, 21 Nov 2025 14:51:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:52:52 +0000   Fri, 21 Nov 2025 14:51:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:52:52 +0000   Fri, 21 Nov 2025 14:52:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-706190
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                aa04f144-e699-4e79-bbe3-e08f8d8ad6bb
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gv42v                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 etcd-pause-706190                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         87s
	  kube-system                 kindnet-w45qn                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      82s
	  kube-system                 kube-apiserver-pause-706190             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-706190    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-hzbpc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-706190             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 80s                kube-proxy       
	  Normal   Starting                 20s                kube-proxy       
	  Warning  CgroupV1                 95s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  94s (x8 over 95s)  kubelet          Node pause-706190 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    94s (x8 over 95s)  kubelet          Node pause-706190 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     94s (x8 over 95s)  kubelet          Node pause-706190 status is now: NodeHasSufficientPID
	  Normal   Starting                 88s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  87s                kubelet          Node pause-706190 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    87s                kubelet          Node pause-706190 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     87s                kubelet          Node pause-706190 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           83s                node-controller  Node pause-706190 event: Registered Node pause-706190 in Controller
	  Normal   NodeReady                41s                kubelet          Node pause-706190 status is now: NodeReady
	  Warning  ContainerGCFailed        28s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           17s                node-controller  Node pause-706190 event: Registered Node pause-706190 in Controller
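
For reference, the percentages in the Allocated resources table above follow from the Allocatable figures by truncating integer division: 850m CPU requested of 2000m allocatable is 42%, and 220Mi (225280Ki) requested of 8022300Ki memory is 2%. A quick sketch reproducing that arithmetic, using only the numbers printed above:

    package main

    import "fmt"

    func main() {
        // Allocatable from the node above: 2 CPUs (2000m), 8022300Ki memory.
        allocCPUMilli, allocMemKi := int64(2000), int64(8022300)
        // Summed pod requests from the table above: 850m CPU, 220Mi memory.
        reqCPUMilli, reqMemKi := int64(850), int64(220*1024)

        // Integer division truncates, matching the displayed 42% and 2%
        // (850/2000 = 42.5% -> 42; 225280/8022300 = 2.8% -> 2).
        fmt.Printf("cpu    %dm (%d%%)\n", reqCPUMilli, reqCPUMilli*100/allocCPUMilli)
        fmt.Printf("memory %dMi (%d%%)\n", reqMemKi/1024, reqMemKi*100/allocMemKi)
    }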
	
	
	==> dmesg <==
	[Nov21 14:24] overlayfs: idmapped layers are currently not supported
	[Nov21 14:25] overlayfs: idmapped layers are currently not supported
	[  +3.881075] overlayfs: idmapped layers are currently not supported
	[Nov21 14:26] overlayfs: idmapped layers are currently not supported
	[Nov21 14:27] overlayfs: idmapped layers are currently not supported
	[Nov21 14:29] overlayfs: idmapped layers are currently not supported
	[Nov21 14:33] kauditd_printk_skb: 8 callbacks suppressed
	[ +39.333625] overlayfs: idmapped layers are currently not supported
	[Nov21 14:34] overlayfs: idmapped layers are currently not supported
	[Nov21 14:35] overlayfs: idmapped layers are currently not supported
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6812ac6759a64275994e5d4179d3b1c59a354178a5c457f989b66dddfd9abce0] <==
	{"level":"warn","ts":"2025-11-21T14:52:02.238165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.292589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.341376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.368677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.415969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.464479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:52:02.557885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:52:57.365731Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-21T14:52:57.365790Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-706190","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-21T14:52:57.365878Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-21T14:52:57.501910Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-21T14:52:57.501988Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:52:57.502028Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-21T14:52:57.502054Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-21T14:52:57.502057Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-21T14:52:57.502186Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-21T14:52:57.502212Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-21T14:52:57.502220Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-21T14:52:57.502254Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-21T14:52:57.502268Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-21T14:52:57.502275Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:52:57.505370Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-21T14:52:57.505449Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:52:57.505489Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-21T14:52:57.505505Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-706190","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [e9789f6445316da4ede3f914ea50a7bb3356c7e4b2a8fc78aef346401a3881ae] <==
	{"level":"warn","ts":"2025-11-21T14:53:10.738852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.771033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.797012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.825481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.847157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.878705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.916754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.939069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.964043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:10.993365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.018936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.049381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.076477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.102585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.130213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.161458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.200484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.266477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.295523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.324524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.339202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.389776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.480941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.483617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:53:11.648755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56470","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:53:33 up  2:36,  0 user,  load average: 2.04, 2.42, 2.17
	Linux pause-706190 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8b6e9299e5d66efba1babf3908bb853e3ef2453315bc7a675c28ebfadd857a0b] <==
	I1121 14:52:12.112590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:52:12.112855       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:52:12.112978       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:52:12.112996       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:52:12.113010       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:52:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:52:12.401909       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:52:12.402090       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:52:12.402142       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:52:12.403137       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:52:42.402650       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:52:42.402895       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:52:42.403046       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 14:52:42.403101       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1121 14:52:43.903040       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:52:43.903068       1 metrics.go:72] Registering metrics
	I1121 14:52:43.903145       1 controller.go:711] "Syncing nftables rules"
	I1121 14:52:52.403414       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:52:52.403502       1 main.go:301] handling current node
	
	
	==> kindnet [8fb4ee525b2491a835969d0c178891f19502424e426454f40c716c5bbbeacfab] <==
	I1121 14:53:08.834524       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:53:08.834919       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:53:08.835242       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:53:08.846365       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:53:08.846438       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:53:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:53:09.039619       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:53:09.039738       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:53:09.039775       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:53:09.040802       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:53:13.040891       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:53:13.040924       1 metrics.go:72] Registering metrics
	I1121 14:53:13.040991       1 controller.go:711] "Syncing nftables rules"
	I1121 14:53:19.036535       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:53:19.036616       1 main.go:301] handling current node
	I1121 14:53:29.037890       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:53:29.037924       1 main.go:301] handling current node
	
	
	==> kube-apiserver [850ff406c6df8f8893b6ab5c9796026832713346c5d66cfb49b7adfbe435e36e] <==
	W1121 14:52:57.376807       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.376820       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.376868       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.376916       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.376963       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377005       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377049       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377091       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377135       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377216       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.374208       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.374895       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377911       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377961       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.377993       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378034       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378055       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378077       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378102       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378123       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378150       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378174       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378197       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378219       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1121 14:52:57.378239       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
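
	[editor's note] The repeated createTransport failures above are the kube-apiserver's etcd client retrying 127.0.0.1:2379 after etcd stopped. A minimal Go probe (illustrative, not part of the test suite) reproduces the same `connect: connection refused` against a closed port:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the etcd client port the apiserver is retrying above.
	// On a node where etcd is stopped, this fails with the same
	// "connect: connection refused" seen in the grpc log lines.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
	if err != nil {
		fmt.Println("etcd endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("etcd endpoint is accepting connections")
}
```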
	
	
	==> kube-apiserver [b4215710d430f809041f5bba4f80d28ce0164af530df92c483f86efe43316256] <==
	I1121 14:53:12.980737       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:53:13.008697       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1121 14:53:13.018468       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:53:13.018510       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 14:53:13.027741       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 14:53:13.027817       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 14:53:13.027827       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:53:13.027947       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1121 14:53:13.032923       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1121 14:53:13.033152       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 14:53:13.033195       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:53:13.035652       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:53:13.037020       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:53:13.037555       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1121 14:53:13.038207       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:53:13.038249       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:53:13.038257       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:53:13.038264       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:53:13.040360       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 14:53:13.710807       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:53:14.884180       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:53:16.282816       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:53:16.526050       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:53:16.577374       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:53:16.676233       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [197f8208783cc8b8d66bcaabe4dafe985f92bc2bb1c5c712bf1bd3332e0271f2] <==
	I1121 14:52:10.349399       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:52:10.349435       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:52:10.349441       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:52:10.349446       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:52:10.351896       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:52:10.364434       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:52:10.364377       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:52:10.369446       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:52:10.369489       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:52:10.369572       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:52:10.369656       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:52:10.369454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:52:10.374559       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-706190" podCIDRs=["10.244.0.0/24"]
	I1121 14:52:10.376353       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:52:10.379693       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:52:10.388492       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:52:10.390609       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:52:10.390709       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:52:10.390613       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:52:10.390656       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:52:10.390626       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:52:10.391932       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:52:10.395332       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:52:10.395421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:52:55.352662       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [bfb73c516ae0adef4c04d9b39aacb5990ae73b4cfa9b6fd7f5696465b6a4b222] <==
	I1121 14:53:16.268786       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:53:16.268826       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:53:16.268869       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:53:16.270866       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:53:16.272004       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:53:16.274640       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:53:16.274700       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:53:16.276840       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:53:16.289201       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:53:16.304832       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:53:16.308949       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:53:16.311734       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:53:16.313485       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:53:16.317840       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:53:16.317857       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:53:16.319083       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:53:16.319090       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:53:16.320243       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:53:16.320347       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:53:16.320545       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-706190"
	I1121 14:53:16.320595       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 14:53:16.324494       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:53:16.327198       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:53:16.330935       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:53:16.333211       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [45d1eb07971cca34df63cc22e950e71d40b97c9098663f5e60130d5a971a5bdc] <==
	I1121 14:52:12.773152       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:52:12.857908       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:52:12.958945       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:52:12.958979       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:52:12.959062       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:52:12.978360       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:52:12.978415       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:52:12.982435       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:52:12.982736       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:52:12.982804       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:52:12.986552       1 config.go:200] "Starting service config controller"
	I1121 14:52:12.986630       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:52:12.987683       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:52:12.987770       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:52:12.987812       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:52:12.987838       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:52:12.992138       1 config.go:309] "Starting node config controller"
	I1121 14:52:12.992223       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:52:12.992257       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:52:13.088136       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:52:13.088346       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:52:13.088801       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
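
	[editor's note] The proxier line above reports setting `route_localnet=1` so NodePorts answer on localhost. A small sketch (assumption: run on the node itself) reads that sysctl back the way the kernel exposes it under /proc/sys:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// kube-proxy sets net.ipv4.conf.all.route_localnet=1 (see log above);
	// the kernel exposes the current value under /proc/sys.
	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
	if err != nil {
		fmt.Println("cannot read sysctl:", err)
		return
	}
	fmt.Println("route_localnet =", strings.TrimSpace(string(b)))
}
```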
	
	
	==> kube-proxy [7fd1af596bd962578d4345a6db324de4fb359033a06c21a89e10c5562cf0406c] <==
	I1121 14:53:11.482586       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:53:12.977444       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:53:13.078412       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:53:13.078505       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:53:13.078603       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:53:13.227715       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:53:13.227828       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:53:13.245627       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:53:13.245967       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:53:13.245993       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:53:13.247112       1 config.go:200] "Starting service config controller"
	I1121 14:53:13.247179       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:53:13.257158       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:53:13.257245       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:53:13.257289       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:53:13.257334       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:53:13.260744       1 config.go:309] "Starting node config controller"
	I1121 14:53:13.260831       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:53:13.260863       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:53:13.347959       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:53:13.358355       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:53:13.358361       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c933d49a5b407943549354e3a9e5fbb544091961370b604289633563c7439472] <==
	E1121 14:52:04.225545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:52:04.225635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:52:04.225741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:52:04.225816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:52:04.225931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:52:04.228182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:52:04.228548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:52:04.228683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:52:04.228832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:52:04.228948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 14:52:04.229177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:52:04.229262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:52:04.234246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:52:04.234401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:52:04.234509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:52:04.234598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:52:04.234730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:52:04.234916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1121 14:52:05.405423       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:52:57.353218       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1121 14:52:57.353244       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1121 14:52:57.353264       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1121 14:52:57.353301       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:52:57.353478       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1121 14:52:57.353494       1 run.go:72] "command failed" err="finished without leader elect"
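
	[editor's note] The burst of `Failed to watch ... is forbidden` errors at 14:52:04 is the scheduler starting before its RBAC grants are visible (the cache sync at 14:52:05 shows they arrived a second later), and the closing `finished without leader elect` is its shutdown as the control plane stopped at 14:52:57. A hedged client-go sketch (the kubeconfig path is illustrative) shows how to ask the apiserver whether such a permission has propagated:

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; use whatever credential the component runs with.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/scheduler.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask whether the current identity may list pods cluster-wide --
	// the same kind of check that fails with "forbidden" in the log above.
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "pods"},
		},
	}
	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", res.Status.Allowed)
}
```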
	
	
	==> kube-scheduler [fbdd6d6086f23497b1695b35d4843d7323a4ff9df5621baeda893ea64d511a23] <==
	I1121 14:53:11.207299       1 serving.go:386] Generated self-signed cert in-memory
	I1121 14:53:13.255338       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:53:13.255370       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:53:13.266143       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 14:53:13.266248       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 14:53:13.266320       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:53:13.266349       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:53:13.266410       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:53:13.266437       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:53:13.266556       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:53:13.266623       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:53:13.367322       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 14:53:13.367439       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:53:13.368329       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.349107    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ca5f2bab7f47ea2d6582c273e4cdc251" pod="kube-system/kube-scheduler-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.349427    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gv42v\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="32e6ea19-296d-433e-ab3e-7e992350c3c2" pod="kube-system/coredns-66bc5c9577-gv42v"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.349756    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="48e6e2eed4477e7c5ce26e1c1c6d3548" pod="kube-system/kube-apiserver-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: I1121 14:53:08.466797    1311 scope.go:117] "RemoveContainer" containerID="8b6e9299e5d66efba1babf3908bb853e3ef2453315bc7a675c28ebfadd857a0b"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.470586    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-w45qn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="82b593e6-c11c-40d5-b942-033d29c7abd1" pod="kube-system/kindnet-w45qn"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.470988    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gv42v\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="32e6ea19-296d-433e-ab3e-7e992350c3c2" pod="kube-system/coredns-66bc5c9577-gv42v"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.471373    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="48e6e2eed4477e7c5ce26e1c1c6d3548" pod="kube-system/kube-apiserver-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.471691    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c15f6a2cc4915b5487781749323e86ff" pod="kube-system/etcd-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.471986    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3207d4d2d11971cbf6bbc243763a30e6" pod="kube-system/kube-controller-manager-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.472534    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ca5f2bab7f47ea2d6582c273e4cdc251" pod="kube-system/kube-scheduler-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: I1121 14:53:08.531899    1311 scope.go:117] "RemoveContainer" containerID="45d1eb07971cca34df63cc22e950e71d40b97c9098663f5e60130d5a971a5bdc"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.543536    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="48e6e2eed4477e7c5ce26e1c1c6d3548" pod="kube-system/kube-apiserver-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.558320    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c15f6a2cc4915b5487781749323e86ff" pod="kube-system/etcd-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.558665    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3207d4d2d11971cbf6bbc243763a30e6" pod="kube-system/kube-controller-manager-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.558855    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-706190\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ca5f2bab7f47ea2d6582c273e4cdc251" pod="kube-system/kube-scheduler-pause-706190"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.559026    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzbpc\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1276e562-5617-4a13-af4d-f386a07e45d1" pod="kube-system/kube-proxy-hzbpc"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.559189    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-w45qn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="82b593e6-c11c-40d5-b942-033d29c7abd1" pod="kube-system/kindnet-w45qn"
	Nov 21 14:53:08 pause-706190 kubelet[1311]: E1121 14:53:08.559374    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gv42v\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="32e6ea19-296d-433e-ab3e-7e992350c3c2" pod="kube-system/coredns-66bc5c9577-gv42v"
	Nov 21 14:53:12 pause-706190 kubelet[1311]: E1121 14:53:12.849805    1311 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-706190\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-706190' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 21 14:53:12 pause-706190 kubelet[1311]: E1121 14:53:12.850408    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-706190\" is forbidden: User \"system:node:pause-706190\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-706190' and this object" podUID="48e6e2eed4477e7c5ce26e1c1c6d3548" pod="kube-system/kube-apiserver-pause-706190"
	Nov 21 14:53:12 pause-706190 kubelet[1311]: E1121 14:53:12.915901    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-706190\" is forbidden: User \"system:node:pause-706190\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-706190' and this object" podUID="c15f6a2cc4915b5487781749323e86ff" pod="kube-system/etcd-pause-706190"
	Nov 21 14:53:16 pause-706190 kubelet[1311]: W1121 14:53:16.238048    1311 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 21 14:53:28 pause-706190 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:53:28 pause-706190 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:53:28 pause-706190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-706190 -n pause-706190
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-706190 -n pause-706190: exit status 2 (363.279879ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-706190 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.03s)
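
[editor's note] `status --format={{.APIServer}}` printed `Running` yet exited 2, which the harness flags as "may be ok": the exit code reflects a component not fully healthy even when the formatted field looks fine. A hedged probe asks the apiserver's /readyz endpoint directly; the address is illustrative (192.168.85.2:8443 is the node-internal endpoint from the kubelet logs above; from the host, use the docker-published port instead):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// /readyz is served over TLS; skip verification for a quick probe.
	c := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get("https://192.168.85.2:8443/readyz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}
```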

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-357479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-357479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (263.885155ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:57:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-357479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-357479 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-357479 describe deploy/metrics-server -n kube-system: exit status 1 (78.419875ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-357479 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
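
[editor's note] Every `EnableAddonWhileActive` failure in this run traces to the same probe shown in the stderr above: `sudo runc list -f json` errors with `open /run/runc: no such file or directory` on this CRI-O node, so the "is anything paused?" check fails instead of concluding nothing is paused. A more tolerant version of that probe might look like this (a sketch, not minikube's actual implementation; JSON field names follow runc's list output):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// No runc state directory means no runc-managed containers
		// at all, hence nothing can be paused.
		if strings.Contains(string(out), "no such file or directory") {
			fmt.Println("no runc state dir; nothing paused")
			return
		}
		fmt.Fprintln(os.Stderr, "runc list failed:", err, string(out))
		os.Exit(1)
	}
	var containers []struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		if c.Status == "paused" {
			fmt.Println("paused container:", c.ID)
		}
	}
}
```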
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-357479
helpers_test.go:243: (dbg) docker inspect old-k8s-version-357479:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19",
	        "Created": "2025-11-21T14:56:00.807071627Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 467987,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:56:00.87459268Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/hostname",
	        "HostsPath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/hosts",
	        "LogPath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19-json.log",
	        "Name": "/old-k8s-version-357479",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-357479:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-357479",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19",
	                "LowerDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-357479",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-357479/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-357479",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-357479",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-357479",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "87f64dc6c7df1fd5249dda75a82d40838b9f95954f0e5be37d9633488e213d5f",
	            "SandboxKey": "/var/run/docker/netns/87f64dc6c7df",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-357479": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:f2:ac:20:0d:bd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7422cf90b3020c38636ee057796200a23d8dcb6121ac9112e0ea63b06e8fa49d",
	                    "EndpointID": "f29ab050462f04623ae6c31e1b45fd3b4811659ac1aefcce350c1574fa282539",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-357479",
	                        "0fe519ab5875"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
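
[editor's note] The inspect output above carries the host-port mappings the tests depend on (e.g. 8443/tcp published at 127.0.0.1:33421). A small decoder (assuming `docker` on PATH; the struct mirrors only the fields shown above) extracts that mapping programmatically:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Mirror just the fields of `docker inspect` output used here.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-357479").Output()
	if err != nil {
		panic(err)
	}
	var res []inspect
	if err := json.Unmarshal(out, &res); err != nil {
		panic(err)
	}
	for _, b := range res[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
	}
}
```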
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357479 -n old-k8s-version-357479
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-357479 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-357479 logs -n 25: (1.212283705s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-609503 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo docker system info                                                                                                                                                                                                      │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo containerd config dump                                                                                                                                                                                                  │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo crio config                                                                                                                                                                                                             │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ delete  │ -p cilium-609503                                                                                                                                                                                                                              │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │ 21 Nov 25 14:54 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-304879   │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │ 21 Nov 25 14:55 UTC │
	│ delete  │ -p force-systemd-env-360486                                                                                                                                                                                                                   │ force-systemd-env-360486 │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p cert-options-605096 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ ssh     │ cert-options-605096 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ ssh     │ -p cert-options-605096 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ delete  │ -p cert-options-605096                                                                                                                                                                                                                        │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-357479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:55:54
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
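	Every entry below uses exactly this glog layout: a severity letter (I/W/E/F), the date as mmdd, a microsecond wall-clock time, the emitting process id, and the source file:line. As an illustrative one-liner (the saved log file name here is hypothetical), warnings and errors can be filtered out of a copy of this log with:
	
	  # keep only W/E/F records; [[:space:]]* tolerates the tab indentation used in this report
	  grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log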
	I1121 14:55:54.743844  467525 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:55:54.744045  467525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:55:54.744072  467525 out.go:374] Setting ErrFile to fd 2...
	I1121 14:55:54.744100  467525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:55:54.744917  467525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:55:54.745473  467525 out.go:368] Setting JSON to false
	I1121 14:55:54.746429  467525 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9507,"bootTime":1763727448,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:55:54.746510  467525 start.go:143] virtualization:  
	I1121 14:55:54.749959  467525 out.go:179] * [old-k8s-version-357479] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:55:54.754173  467525 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:55:54.754267  467525 notify.go:221] Checking for updates...
	I1121 14:55:54.757575  467525 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:55:54.760967  467525 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:55:54.763908  467525 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:55:54.766926  467525 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:55:54.769824  467525 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:55:54.773157  467525 config.go:182] Loaded profile config "cert-expiration-304879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:55:54.773272  467525 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:55:54.817942  467525 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:55:54.818084  467525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:55:54.880811  467525 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:55:54.870333838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:55:54.880923  467525 docker.go:319] overlay module found
	I1121 14:55:54.884080  467525 out.go:179] * Using the docker driver based on user configuration
	I1121 14:55:54.887024  467525 start.go:309] selected driver: docker
	I1121 14:55:54.887044  467525 start.go:930] validating driver "docker" against <nil>
	I1121 14:55:54.887060  467525 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:55:54.888121  467525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:55:54.953524  467525 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:55:54.943676109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:55:54.953688  467525 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:55:54.953944  467525 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:55:54.956912  467525 out.go:179] * Using Docker driver with root privileges
	I1121 14:55:54.959837  467525 cni.go:84] Creating CNI manager for ""
	I1121 14:55:54.959911  467525 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:55:54.959928  467525 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:55:54.960015  467525 start.go:353] cluster config:
	{Name:old-k8s-version-357479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-357479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:55:54.963141  467525 out.go:179] * Starting "old-k8s-version-357479" primary control-plane node in "old-k8s-version-357479" cluster
	I1121 14:55:54.966153  467525 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:55:54.969146  467525 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:55:54.972005  467525 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 14:55:54.972055  467525 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1121 14:55:54.972065  467525 cache.go:65] Caching tarball of preloaded images
	I1121 14:55:54.972097  467525 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:55:54.972154  467525 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 14:55:54.972165  467525 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1121 14:55:54.972277  467525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/config.json ...
	I1121 14:55:54.972300  467525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/config.json: {Name:mk181a801c732d3be2bc830130907a2b64db9a59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:55:54.992537  467525 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:55:54.992566  467525 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:55:54.992586  467525 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:55:54.992610  467525 start.go:360] acquireMachinesLock for old-k8s-version-357479: {Name:mkee659a8f6abec9bb7dae4fcd9dfcbc91c829e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:55:54.992724  467525 start.go:364] duration metric: took 93.277µs to acquireMachinesLock for "old-k8s-version-357479"
	I1121 14:55:54.992755  467525 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-357479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-357479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:55:54.992838  467525 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:55:54.996679  467525 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:55:54.996961  467525 start.go:159] libmachine.API.Create for "old-k8s-version-357479" (driver="docker")
	I1121 14:55:54.997195  467525 client.go:173] LocalClient.Create starting
	I1121 14:55:54.997285  467525 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem
	I1121 14:55:54.997331  467525 main.go:143] libmachine: Decoding PEM data...
	I1121 14:55:54.997349  467525 main.go:143] libmachine: Parsing certificate...
	I1121 14:55:54.997415  467525 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem
	I1121 14:55:54.997441  467525 main.go:143] libmachine: Decoding PEM data...
	I1121 14:55:54.997453  467525 main.go:143] libmachine: Parsing certificate...
	I1121 14:55:54.997924  467525 cli_runner.go:164] Run: docker network inspect old-k8s-version-357479 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:55:55.021401  467525 cli_runner.go:211] docker network inspect old-k8s-version-357479 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:55:55.021523  467525 network_create.go:284] running [docker network inspect old-k8s-version-357479] to gather additional debugging logs...
	I1121 14:55:55.021545  467525 cli_runner.go:164] Run: docker network inspect old-k8s-version-357479
	W1121 14:55:55.042250  467525 cli_runner.go:211] docker network inspect old-k8s-version-357479 returned with exit code 1
	I1121 14:55:55.042282  467525 network_create.go:287] error running [docker network inspect old-k8s-version-357479]: docker network inspect old-k8s-version-357479: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-357479 not found
	I1121 14:55:55.042299  467525 network_create.go:289] output of [docker network inspect old-k8s-version-357479]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-357479 not found
	
	** /stderr **
	I1121 14:55:55.042413  467525 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:55:55.060855  467525 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-82d3b8bc8a36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:46:f3:82:e8:95} reservation:<nil>}
	I1121 14:55:55.061326  467525 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-741c868a6917 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:04:b7:a7:98:dc} reservation:<nil>}
	I1121 14:55:55.061588  467525 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-047a1ecabae6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:eb:03:dd:6a:cd} reservation:<nil>}
	I1121 14:55:55.062025  467525 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019acca0}
	I1121 14:55:55.062052  467525 network_create.go:124] attempt to create docker network old-k8s-version-357479 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:55:55.062124  467525 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-357479 old-k8s-version-357479
	I1121 14:55:55.139591  467525 network_create.go:108] docker network old-k8s-version-357479 192.168.76.0/24 created
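	The network now exists with the flags logged just above; a quick cross-check from the same host (mirroring the inspect template minikube itself runs) would be:
	
	  docker network inspect old-k8s-version-357479 --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	  # expected output: 192.168.76.0/24 via 192.168.76.1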
	I1121 14:55:55.139623  467525 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-357479" container
	I1121 14:55:55.139723  467525 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:55:55.157343  467525 cli_runner.go:164] Run: docker volume create old-k8s-version-357479 --label name.minikube.sigs.k8s.io=old-k8s-version-357479 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:55:55.177376  467525 oci.go:103] Successfully created a docker volume old-k8s-version-357479
	I1121 14:55:55.177482  467525 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-357479-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-357479 --entrypoint /usr/bin/test -v old-k8s-version-357479:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:55:55.696889  467525 oci.go:107] Successfully prepared a docker volume old-k8s-version-357479
	I1121 14:55:55.696971  467525 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 14:55:55.696988  467525 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:55:55.697067  467525 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-357479:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:56:00.724523  467525 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-357479:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.027393722s)
	I1121 14:56:00.724558  467525 kic.go:203] duration metric: took 5.027566171s to extract preloaded images to volume ...
	W1121 14:56:00.724697  467525 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:56:00.724813  467525 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:56:00.782714  467525 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-357479 --name old-k8s-version-357479 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-357479 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-357479 --network old-k8s-version-357479 --ip 192.168.76.2 --volume old-k8s-version-357479:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:56:01.118622  467525 cli_runner.go:164] Run: docker container inspect old-k8s-version-357479 --format={{.State.Running}}
	I1121 14:56:01.144215  467525 cli_runner.go:164] Run: docker container inspect old-k8s-version-357479 --format={{.State.Status}}
	I1121 14:56:01.172337  467525 cli_runner.go:164] Run: docker exec old-k8s-version-357479 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:56:01.230826  467525 oci.go:144] the created container "old-k8s-version-357479" has a running status.
	I1121 14:56:01.230866  467525 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa...
	I1121 14:56:01.388190  467525 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:56:01.416547  467525 cli_runner.go:164] Run: docker container inspect old-k8s-version-357479 --format={{.State.Status}}
	I1121 14:56:01.442654  467525 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:56:01.442677  467525 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-357479 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:56:01.509580  467525 cli_runner.go:164] Run: docker container inspect old-k8s-version-357479 --format={{.State.Status}}
	I1121 14:56:01.530046  467525 machine.go:94] provisionDockerMachine start ...
	I1121 14:56:01.530149  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:01.553301  467525 main.go:143] libmachine: Using SSH client type: native
	I1121 14:56:01.553790  467525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1121 14:56:01.553805  467525 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:56:01.554471  467525 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 14:56:04.700519  467525 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-357479
	
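	The earlier "ssh: handshake failed: EOF" is benign: libmachine starts dialing before sshd inside the just-started container accepts connections, and retries until the hostname probe succeeds, roughly three seconds later here. The same probe, reconstructed from values in this log (forwarded port 33418, user docker, the generated key):
	
	  ssh -o StrictHostKeyChecking=no -p 33418 \
	    -i /home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa \
	    docker@127.0.0.1 hostname
	  # prints: old-k8s-version-357479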
	I1121 14:56:04.700544  467525 ubuntu.go:182] provisioning hostname "old-k8s-version-357479"
	I1121 14:56:04.700611  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:04.719148  467525 main.go:143] libmachine: Using SSH client type: native
	I1121 14:56:04.719457  467525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1121 14:56:04.719473  467525 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-357479 && echo "old-k8s-version-357479" | sudo tee /etc/hostname
	I1121 14:56:04.886351  467525 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-357479
	
	I1121 14:56:04.886472  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:04.905786  467525 main.go:143] libmachine: Using SSH client type: native
	I1121 14:56:04.906150  467525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1121 14:56:04.906173  467525 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-357479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-357479/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-357479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:56:05.061082  467525 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:56:05.061175  467525 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 14:56:05.061224  467525 ubuntu.go:190] setting up certificates
	I1121 14:56:05.061252  467525 provision.go:84] configureAuth start
	I1121 14:56:05.061340  467525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-357479
	I1121 14:56:05.079611  467525 provision.go:143] copyHostCerts
	I1121 14:56:05.079695  467525 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 14:56:05.079709  467525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 14:56:05.079790  467525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 14:56:05.079888  467525 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 14:56:05.079893  467525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 14:56:05.079919  467525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 14:56:05.079981  467525 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 14:56:05.079985  467525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 14:56:05.080009  467525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 14:56:05.080062  467525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-357479 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-357479]
	I1121 14:56:05.760535  467525 provision.go:177] copyRemoteCerts
	I1121 14:56:05.760600  467525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:56:05.760645  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:05.777683  467525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa Username:docker}
	I1121 14:56:05.888159  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:56:05.908637  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:56:05.926757  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:56:05.944153  467525 provision.go:87] duration metric: took 882.864605ms to configureAuth
	I1121 14:56:05.944223  467525 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:56:05.944520  467525 config.go:182] Loaded profile config "old-k8s-version-357479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1121 14:56:05.944671  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:05.961968  467525 main.go:143] libmachine: Using SSH client type: native
	I1121 14:56:05.962282  467525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1121 14:56:05.962303  467525 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:56:06.256639  467525 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:56:06.256660  467525 machine.go:97] duration metric: took 4.726594706s to provisionDockerMachine
	I1121 14:56:06.256670  467525 client.go:176] duration metric: took 11.259463662s to LocalClient.Create
	I1121 14:56:06.256683  467525 start.go:167] duration metric: took 11.259725761s to libmachine.API.Create "old-k8s-version-357479"
	I1121 14:56:06.256691  467525 start.go:293] postStartSetup for "old-k8s-version-357479" (driver="docker")
	I1121 14:56:06.256701  467525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:56:06.256762  467525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:56:06.256821  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:06.274707  467525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa Username:docker}
	I1121 14:56:06.380637  467525 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:56:06.383895  467525 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:56:06.383935  467525 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:56:06.383947  467525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 14:56:06.384014  467525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 14:56:06.384101  467525 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 14:56:06.384207  467525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:56:06.391486  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:56:06.409814  467525 start.go:296] duration metric: took 153.108543ms for postStartSetup
	I1121 14:56:06.410198  467525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-357479
	I1121 14:56:06.427312  467525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/config.json ...
	I1121 14:56:06.427612  467525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:56:06.427657  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:06.444716  467525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa Username:docker}
	I1121 14:56:06.541690  467525 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:56:06.547679  467525 start.go:128] duration metric: took 11.554825279s to createHost
	I1121 14:56:06.547705  467525 start.go:83] releasing machines lock for "old-k8s-version-357479", held for 11.554966803s
	I1121 14:56:06.547775  467525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-357479
	I1121 14:56:06.564708  467525 ssh_runner.go:195] Run: cat /version.json
	I1121 14:56:06.564777  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:06.565036  467525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:56:06.565095  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:06.582690  467525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa Username:docker}
	I1121 14:56:06.583985  467525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa Username:docker}
	I1121 14:56:06.770851  467525 ssh_runner.go:195] Run: systemctl --version
	I1121 14:56:06.777233  467525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:56:06.827453  467525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:56:06.831769  467525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:56:06.831846  467525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:56:06.861674  467525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 14:56:06.861698  467525 start.go:496] detecting cgroup driver to use...
	I1121 14:56:06.861748  467525 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:56:06.861803  467525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:56:06.880100  467525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:56:06.893031  467525 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:56:06.893134  467525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:56:06.909609  467525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:56:06.928782  467525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:56:07.037768  467525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:56:07.165661  467525 docker.go:234] disabling docker service ...
	I1121 14:56:07.165768  467525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:56:07.189686  467525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:56:07.205038  467525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:56:07.318609  467525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:56:07.443360  467525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:56:07.456141  467525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:56:07.471945  467525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1121 14:56:07.472013  467525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:56:07.481088  467525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 14:56:07.481172  467525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:56:07.490077  467525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:56:07.499266  467525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:56:07.509944  467525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:56:07.518552  467525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:56:07.527594  467525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:56:07.542851  467525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
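	Net effect of the sed edits above, sketched as the keys they leave in /etc/crio/crio.conf.d/02-crio.conf (section headers assumed from the stock kicbase drop-in):
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]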
	I1121 14:56:07.553457  467525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:56:07.562061  467525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:56:07.580802  467525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:56:07.703894  467525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 14:56:07.865014  467525 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:56:07.865106  467525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:56:07.869148  467525 start.go:564] Will wait 60s for crictl version
	I1121 14:56:07.869270  467525 ssh_runner.go:195] Run: which crictl
	I1121 14:56:07.872795  467525 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:56:07.897848  467525 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:56:07.897992  467525 ssh_runner.go:195] Run: crio --version
	I1121 14:56:07.925841  467525 ssh_runner.go:195] Run: crio --version
	I1121 14:56:07.962174  467525 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1121 14:56:07.965216  467525 cli_runner.go:164] Run: docker network inspect old-k8s-version-357479 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:56:07.982180  467525 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 14:56:07.986116  467525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:56:07.996189  467525 kubeadm.go:884] updating cluster {Name:old-k8s-version-357479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-357479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:56:07.996311  467525 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 14:56:07.996365  467525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:56:08.032440  467525 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:56:08.032465  467525 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:56:08.032528  467525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:56:08.058394  467525 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:56:08.058420  467525 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:56:08.058430  467525 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1121 14:56:08.058530  467525 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-357479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-357479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
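	This fragment is written out a few lines below as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The merged unit can be inspected inside the running node with, for example:
	
	  minikube -p old-k8s-version-357479 ssh -- sudo systemctl cat kubelet --no-pager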
	I1121 14:56:08.058619  467525 ssh_runner.go:195] Run: crio config
	I1121 14:56:08.134031  467525 cni.go:84] Creating CNI manager for ""
	I1121 14:56:08.134052  467525 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:56:08.134073  467525 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:56:08.134097  467525 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-357479 NodeName:old-k8s-version-357479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:56:08.134239  467525 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-357479"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
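	This config is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below). As an illustrative sanity check, the matching kubeadm binary can run just its preflight phase against it from inside the node:
	
	  sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new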
	I1121 14:56:08.134322  467525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1121 14:56:08.142458  467525 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:56:08.142540  467525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:56:08.152051  467525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1121 14:56:08.166409  467525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:56:08.180230  467525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1121 14:56:08.194354  467525 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:56:08.198359  467525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:56:08.208302  467525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:56:08.329117  467525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:56:08.347616  467525 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479 for IP: 192.168.76.2
	I1121 14:56:08.347643  467525 certs.go:195] generating shared ca certs ...
	I1121 14:56:08.347659  467525 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:56:08.347803  467525 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 14:56:08.347850  467525 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 14:56:08.347863  467525 certs.go:257] generating profile certs ...
	I1121 14:56:08.347919  467525 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.key
	I1121 14:56:08.347936  467525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt with IP's: []
	I1121 14:56:09.037817  467525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt ...
	I1121 14:56:09.037846  467525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: {Name:mk98e9bcd9aa5c612a0a0e8f8f66fd065be3726e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:56:09.038059  467525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.key ...
	I1121 14:56:09.038076  467525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.key: {Name:mk7a73dfa08008593048a8599891a9b55aae7a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:56:09.038167  467525 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.key.15b34b71
	I1121 14:56:09.038189  467525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.crt.15b34b71 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1121 14:56:09.857668  467525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.crt.15b34b71 ...
	I1121 14:56:09.857703  467525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.crt.15b34b71: {Name:mkb2652375190fac31d22a182eb2068d3b63c06a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:56:09.857912  467525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.key.15b34b71 ...
	I1121 14:56:09.857928  467525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.key.15b34b71: {Name:mka7b61c91db91cbdcccd567c38b95f29d967743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:56:09.858017  467525 certs.go:382] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.crt.15b34b71 -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.crt
	I1121 14:56:09.858097  467525 certs.go:386] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.key.15b34b71 -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.key
	I1121 14:56:09.858159  467525 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/proxy-client.key
	I1121 14:56:09.858173  467525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/proxy-client.crt with IP's: []
	I1121 14:56:10.033198  467525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/proxy-client.crt ...
	I1121 14:56:10.033234  467525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/proxy-client.crt: {Name:mk3a146f35c31b3063736231daef490d142bfcb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:56:10.033428  467525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/proxy-client.key ...
	I1121 14:56:10.033446  467525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/proxy-client.key: {Name:mk26b8289d200634651c7dfb0824a098272a1f9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
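minikube mints these profile certificates in Go (crypto.go), but the shape of the apiserver certificate generated a few lines up is easy to reproduce with openssl: a key pair plus a CA-signed certificate whose subjectAltName carries the service, loopback, and node IPs listed above. A rough equivalent, not minikube's actual code:

    # assumes the shared CA pair (ca.crt/ca.key) is in the working directory
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2') \
      -days 365 -out apiserver.crt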
	I1121 14:56:10.033625  467525 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 14:56:10.033676  467525 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 14:56:10.033691  467525 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:56:10.033718  467525 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:56:10.033748  467525 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:56:10.033778  467525 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 14:56:10.033829  467525 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:56:10.034462  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:56:10.067037  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:56:10.089766  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:56:10.110237  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:56:10.129268  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:56:10.149012  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:56:10.169799  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:56:10.189525  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:56:10.208581  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 14:56:10.226801  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 14:56:10.245417  467525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:56:10.263914  467525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:56:10.277295  467525 ssh_runner.go:195] Run: openssl version
	I1121 14:56:10.283842  467525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 14:56:10.292790  467525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 14:56:10.296789  467525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 14:56:10.296853  467525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 14:56:10.338333  467525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 14:56:10.346837  467525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 14:56:10.355608  467525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 14:56:10.359551  467525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 14:56:10.359648  467525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 14:56:10.403341  467525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:56:10.411995  467525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:56:10.420909  467525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:56:10.424553  467525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:56:10.424627  467525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:56:10.465599  467525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
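The three blocks above (one per CA bundle entry) follow OpenSSL's trust-store convention: every CA under /etc/ssl/certs must be reachable through a symlink named after its subject hash with a .0 suffix, which is exactly what the hash-then-link pairs compute. Condensed into a sketch for the minikube CA:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")      # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"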
	I1121 14:56:10.474142  467525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:56:10.477673  467525 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:56:10.477725  467525 kubeadm.go:401] StartCluster: {Name:old-k8s-version-357479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-357479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:56:10.477797  467525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:56:10.477855  467525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:56:10.505694  467525 cri.go:89] found id: ""
	I1121 14:56:10.505778  467525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:56:10.516975  467525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:56:10.524932  467525 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:56:10.525029  467525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:56:10.533039  467525 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:56:10.533062  467525 kubeadm.go:158] found existing configuration files:
	
	I1121 14:56:10.533116  467525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:56:10.543556  467525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:56:10.543644  467525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:56:10.550978  467525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:56:10.558603  467525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:56:10.558693  467525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:56:10.566348  467525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:56:10.574241  467525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:56:10.574315  467525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:56:10.582053  467525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:56:10.589629  467525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:56:10.589693  467525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
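The grep/rm pairs from 14:56:10.533 onward are minikube's stale-config sweep: each control-plane kubeconfig is checked for the expected API endpoint and deleted when the endpoint is absent (here the files do not exist yet, so every grep exits 2 and every rm is a no-op). The same sweep as a bash sketch:

    ENDPOINT=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done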
	I1121 14:56:10.597140  467525 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:56:10.646292  467525 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1121 14:56:10.646358  467525 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:56:10.686658  467525 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:56:10.686737  467525 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:56:10.686779  467525 kubeadm.go:319] OS: Linux
	I1121 14:56:10.686831  467525 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:56:10.686885  467525 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:56:10.686938  467525 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:56:10.686992  467525 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:56:10.687047  467525 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:56:10.687101  467525 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:56:10.687168  467525 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:56:10.687223  467525 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:56:10.687275  467525 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:56:10.812293  467525 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:56:10.812482  467525 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:56:10.812588  467525 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:56:10.991106  467525 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:56:10.997278  467525 out.go:252]   - Generating certificates and keys ...
	I1121 14:56:10.997382  467525 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:56:10.997459  467525 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:56:11.342908  467525 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:56:11.897497  467525 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:56:12.513620  467525 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:56:12.778146  467525 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:56:13.439161  467525 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:56:13.439619  467525 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-357479] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:56:14.120720  467525 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:56:14.121037  467525 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-357479] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:56:15.153654  467525 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:56:15.688766  467525 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:56:16.429176  467525 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:56:16.429518  467525 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:56:16.626892  467525 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:56:17.653704  467525 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:56:18.482430  467525 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:56:19.279804  467525 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:56:19.280600  467525 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:56:19.283328  467525 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:56:19.287001  467525 out.go:252]   - Booting up control plane ...
	I1121 14:56:19.287139  467525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:56:19.287260  467525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:56:19.287374  467525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:56:19.306204  467525 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:56:19.307866  467525 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:56:19.307928  467525 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:56:19.442891  467525 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1121 14:56:26.943097  467525 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.502342 seconds
	I1121 14:56:26.943228  467525 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:56:26.968232  467525 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:56:27.496881  467525 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:56:27.497144  467525 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-357479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:56:28.019794  467525 kubeadm.go:319] [bootstrap-token] Using token: ekglmq.9m1lj8pzj75efh0s
	I1121 14:56:28.022802  467525 out.go:252]   - Configuring RBAC rules ...
	I1121 14:56:28.022934  467525 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:56:28.032438  467525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:56:28.044890  467525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:56:28.049658  467525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:56:28.054982  467525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:56:28.059130  467525 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:56:28.075471  467525 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:56:28.366962  467525 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:56:28.454849  467525 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:56:28.462647  467525 kubeadm.go:319] 
	I1121 14:56:28.462750  467525 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:56:28.462762  467525 kubeadm.go:319] 
	I1121 14:56:28.462844  467525 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:56:28.462868  467525 kubeadm.go:319] 
	I1121 14:56:28.462905  467525 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:56:28.462968  467525 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:56:28.463041  467525 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:56:28.463047  467525 kubeadm.go:319] 
	I1121 14:56:28.463113  467525 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:56:28.463127  467525 kubeadm.go:319] 
	I1121 14:56:28.463183  467525 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:56:28.463194  467525 kubeadm.go:319] 
	I1121 14:56:28.463250  467525 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:56:28.463333  467525 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:56:28.463409  467525 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:56:28.463417  467525 kubeadm.go:319] 
	I1121 14:56:28.463505  467525 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:56:28.463589  467525 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:56:28.463597  467525 kubeadm.go:319] 
	I1121 14:56:28.463685  467525 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ekglmq.9m1lj8pzj75efh0s \
	I1121 14:56:28.463797  467525 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 14:56:28.463824  467525 kubeadm.go:319] 	--control-plane 
	I1121 14:56:28.463832  467525 kubeadm.go:319] 
	I1121 14:56:28.463921  467525 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:56:28.463930  467525 kubeadm.go:319] 
	I1121 14:56:28.464016  467525 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ekglmq.9m1lj8pzj75efh0s \
	I1121 14:56:28.464126  467525 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
	I1121 14:56:28.470178  467525 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:56:28.470308  467525 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
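The --discovery-token-ca-cert-hash printed in the join commands lets joining nodes pin the cluster CA. It can be recomputed from the CA certificate with the standard openssl pipeline from the kubeadm documentation (the certificateDir for this run is /var/lib/minikube/certs):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # expected to match the sha256:6fe5ac5e... value above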
	I1121 14:56:28.470329  467525 cni.go:84] Creating CNI manager for ""
	I1121 14:56:28.470341  467525 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:56:28.473628  467525 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:56:28.476527  467525 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:56:28.480895  467525 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1121 14:56:28.480919  467525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:56:28.507421  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:56:29.515521  467525 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.008054824s)
	I1121 14:56:29.515571  467525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:56:29.515676  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:29.515695  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-357479 minikube.k8s.io/updated_at=2025_11_21T14_56_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=old-k8s-version-357479 minikube.k8s.io/primary=true
	I1121 14:56:29.647371  467525 ops.go:34] apiserver oom_adj: -16
	I1121 14:56:29.647472  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:30.148553  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:30.647583  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:31.147680  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:31.648080  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:32.147793  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:32.648494  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:33.147603  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:33.648156  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:34.147591  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:34.648056  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:35.148271  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:35.648282  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:36.148251  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:36.648489  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:37.148508  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:37.648601  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:38.147661  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:38.648337  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:39.147945  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:39.648000  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:40.147975  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:40.647999  467525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:56:40.762703  467525 kubeadm.go:1114] duration metric: took 11.247105065s to wait for elevateKubeSystemPrivileges
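The burst of identical `kubectl get sa default` calls between 14:56:29 and 14:56:40 is a plain poll: the default ServiceAccount is created asynchronously after kubeadm init, and minikube retries roughly twice a second until it appears. Reduced to a bash sketch:

    KUBECTL="sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
    until $KUBECTL get sa default >/dev/null 2>&1; do
      sleep 0.5
    done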
	I1121 14:56:40.762735  467525 kubeadm.go:403] duration metric: took 30.285007513s to StartCluster
	I1121 14:56:40.762752  467525 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:56:40.762814  467525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:56:40.763802  467525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:56:40.764019  467525 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:56:40.764175  467525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:56:40.764458  467525 config.go:182] Loaded profile config "old-k8s-version-357479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1121 14:56:40.764499  467525 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:56:40.764566  467525 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-357479"
	I1121 14:56:40.764582  467525 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-357479"
	I1121 14:56:40.764602  467525 host.go:66] Checking if "old-k8s-version-357479" exists ...
	I1121 14:56:40.765331  467525 cli_runner.go:164] Run: docker container inspect old-k8s-version-357479 --format={{.State.Status}}
	I1121 14:56:40.765788  467525 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-357479"
	I1121 14:56:40.765807  467525 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-357479"
	I1121 14:56:40.766062  467525 cli_runner.go:164] Run: docker container inspect old-k8s-version-357479 --format={{.State.Status}}
	I1121 14:56:40.767723  467525 out.go:179] * Verifying Kubernetes components...
	I1121 14:56:40.772712  467525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:56:40.807822  467525 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-357479"
	I1121 14:56:40.807864  467525 host.go:66] Checking if "old-k8s-version-357479" exists ...
	I1121 14:56:40.808336  467525 cli_runner.go:164] Run: docker container inspect old-k8s-version-357479 --format={{.State.Status}}
	I1121 14:56:40.808529  467525 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:56:40.814524  467525 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:56:40.814549  467525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:56:40.814616  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:40.848567  467525 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:56:40.848591  467525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:56:40.848658  467525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:56:40.871921  467525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa Username:docker}
	I1121 14:56:40.886188  467525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa Username:docker}
	I1121 14:56:41.182418  467525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:56:41.232285  467525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:56:41.232491  467525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:56:41.289051  467525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:56:42.290464  467525 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.05814097s)
	I1121 14:56:42.290512  467525 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
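The sed pipeline that just completed splices a hosts plugin block into CoreDNS's Corefile ahead of the forward directive, so in-cluster lookups of host.minikube.internal resolve to the gateway address. The injected fragment, as written into the ConfigMap:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }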
	I1121 14:56:42.291855  467525 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.059336668s)
	I1121 14:56:42.293024  467525 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-357479" to be "Ready" ...
	I1121 14:56:42.731183  467525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.442095136s)
	I1121 14:56:42.736545  467525 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1121 14:56:42.741543  467525 addons.go:530] duration metric: took 1.977034428s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1121 14:56:42.797698  467525 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-357479" context rescaled to 1 replicas
	W1121 14:56:44.295974  467525 node_ready.go:57] node "old-k8s-version-357479" has "Ready":"False" status (will retry)
	W1121 14:56:46.296351  467525 node_ready.go:57] node "old-k8s-version-357479" has "Ready":"False" status (will retry)
	W1121 14:56:48.296534  467525 node_ready.go:57] node "old-k8s-version-357479" has "Ready":"False" status (will retry)
	W1121 14:56:50.797573  467525 node_ready.go:57] node "old-k8s-version-357479" has "Ready":"False" status (will retry)
	W1121 14:56:52.798864  467525 node_ready.go:57] node "old-k8s-version-357479" has "Ready":"False" status (will retry)
	W1121 14:56:54.800712  467525 node_ready.go:57] node "old-k8s-version-357479" has "Ready":"False" status (will retry)
	I1121 14:56:55.296547  467525 node_ready.go:49] node "old-k8s-version-357479" is "Ready"
	I1121 14:56:55.296579  467525 node_ready.go:38] duration metric: took 13.003521442s for node "old-k8s-version-357479" to be "Ready" ...
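minikube performs this wait internally, but the equivalent check from a shell is a one-liner against the profile's kubeconfig:

    kubectl --kubeconfig /home/jenkins/minikube-integration/21847-289204/kubeconfig \
      wait --for=condition=Ready node/old-k8s-version-357479 --timeout=6m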
	I1121 14:56:55.296593  467525 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:56:55.296656  467525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:56:55.311175  467525 api_server.go:72] duration metric: took 14.547127943s to wait for apiserver process to appear ...
	I1121 14:56:55.311200  467525 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:56:55.311219  467525 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:56:55.320538  467525 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 14:56:55.322954  467525 api_server.go:141] control plane version: v1.28.0
	I1121 14:56:55.322981  467525 api_server.go:131] duration metric: took 11.774051ms to wait for apiserver health ...
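The healthz probe is a plain HTTPS GET that returns the literal body `ok` once every registered check passes; -k (or trusting the profile CA) is needed because the serving certificate is signed by minikube's own CA:

    curl -k https://192.168.76.2:8443/healthz
    # ok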
	I1121 14:56:55.322990  467525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:56:55.327460  467525 system_pods.go:59] 8 kube-system pods found
	I1121 14:56:55.327493  467525 system_pods.go:61] "coredns-5dd5756b68-xt9qp" [e7a7e00e-fd76-4248-842e-930e4f4bc7c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:56:55.327501  467525 system_pods.go:61] "etcd-old-k8s-version-357479" [3dfc3d83-d000-4b27-873e-fa8393f5173e] Running
	I1121 14:56:55.327506  467525 system_pods.go:61] "kindnet-2bwt6" [f9be6f17-42ba-4ecc-b10d-b04a9b621450] Running
	I1121 14:56:55.327511  467525 system_pods.go:61] "kube-apiserver-old-k8s-version-357479" [992bf0e9-7b23-4dab-85dd-d30f6e3d9779] Running
	I1121 14:56:55.327516  467525 system_pods.go:61] "kube-controller-manager-old-k8s-version-357479" [6c4bc234-c983-45e3-afd4-51a8a7bb404d] Running
	I1121 14:56:55.327520  467525 system_pods.go:61] "kube-proxy-f2r9z" [2cbd7fde-1632-46fe-82b2-1ee3dff9f82d] Running
	I1121 14:56:55.327524  467525 system_pods.go:61] "kube-scheduler-old-k8s-version-357479" [bdf6c3cd-3613-407e-ae64-1e64a410f629] Running
	I1121 14:56:55.327530  467525 system_pods.go:61] "storage-provisioner" [bf3aa89f-4825-45d4-82a7-ae9bcca798b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:56:55.327535  467525 system_pods.go:74] duration metric: took 4.540207ms to wait for pod list to return data ...
	I1121 14:56:55.327543  467525 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:56:55.334723  467525 default_sa.go:45] found service account: "default"
	I1121 14:56:55.334799  467525 default_sa.go:55] duration metric: took 7.249655ms for default service account to be created ...
	I1121 14:56:55.334824  467525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:56:55.340002  467525 system_pods.go:86] 8 kube-system pods found
	I1121 14:56:55.340113  467525 system_pods.go:89] "coredns-5dd5756b68-xt9qp" [e7a7e00e-fd76-4248-842e-930e4f4bc7c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:56:55.340138  467525 system_pods.go:89] "etcd-old-k8s-version-357479" [3dfc3d83-d000-4b27-873e-fa8393f5173e] Running
	I1121 14:56:55.340157  467525 system_pods.go:89] "kindnet-2bwt6" [f9be6f17-42ba-4ecc-b10d-b04a9b621450] Running
	I1121 14:56:55.340188  467525 system_pods.go:89] "kube-apiserver-old-k8s-version-357479" [992bf0e9-7b23-4dab-85dd-d30f6e3d9779] Running
	I1121 14:56:55.340210  467525 system_pods.go:89] "kube-controller-manager-old-k8s-version-357479" [6c4bc234-c983-45e3-afd4-51a8a7bb404d] Running
	I1121 14:56:55.340227  467525 system_pods.go:89] "kube-proxy-f2r9z" [2cbd7fde-1632-46fe-82b2-1ee3dff9f82d] Running
	I1121 14:56:55.340243  467525 system_pods.go:89] "kube-scheduler-old-k8s-version-357479" [bdf6c3cd-3613-407e-ae64-1e64a410f629] Running
	I1121 14:56:55.340268  467525 system_pods.go:89] "storage-provisioner" [bf3aa89f-4825-45d4-82a7-ae9bcca798b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:56:55.340311  467525 retry.go:31] will retry after 210.298611ms: missing components: kube-dns
	I1121 14:56:55.555677  467525 system_pods.go:86] 8 kube-system pods found
	I1121 14:56:55.555714  467525 system_pods.go:89] "coredns-5dd5756b68-xt9qp" [e7a7e00e-fd76-4248-842e-930e4f4bc7c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:56:55.555721  467525 system_pods.go:89] "etcd-old-k8s-version-357479" [3dfc3d83-d000-4b27-873e-fa8393f5173e] Running
	I1121 14:56:55.555727  467525 system_pods.go:89] "kindnet-2bwt6" [f9be6f17-42ba-4ecc-b10d-b04a9b621450] Running
	I1121 14:56:55.555770  467525 system_pods.go:89] "kube-apiserver-old-k8s-version-357479" [992bf0e9-7b23-4dab-85dd-d30f6e3d9779] Running
	I1121 14:56:55.555783  467525 system_pods.go:89] "kube-controller-manager-old-k8s-version-357479" [6c4bc234-c983-45e3-afd4-51a8a7bb404d] Running
	I1121 14:56:55.555802  467525 system_pods.go:89] "kube-proxy-f2r9z" [2cbd7fde-1632-46fe-82b2-1ee3dff9f82d] Running
	I1121 14:56:55.555817  467525 system_pods.go:89] "kube-scheduler-old-k8s-version-357479" [bdf6c3cd-3613-407e-ae64-1e64a410f629] Running
	I1121 14:56:55.555824  467525 system_pods.go:89] "storage-provisioner" [bf3aa89f-4825-45d4-82a7-ae9bcca798b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:56:55.555841  467525 retry.go:31] will retry after 254.874985ms: missing components: kube-dns
	I1121 14:56:55.815179  467525 system_pods.go:86] 8 kube-system pods found
	I1121 14:56:55.815214  467525 system_pods.go:89] "coredns-5dd5756b68-xt9qp" [e7a7e00e-fd76-4248-842e-930e4f4bc7c9] Running
	I1121 14:56:55.815222  467525 system_pods.go:89] "etcd-old-k8s-version-357479" [3dfc3d83-d000-4b27-873e-fa8393f5173e] Running
	I1121 14:56:55.815231  467525 system_pods.go:89] "kindnet-2bwt6" [f9be6f17-42ba-4ecc-b10d-b04a9b621450] Running
	I1121 14:56:55.815237  467525 system_pods.go:89] "kube-apiserver-old-k8s-version-357479" [992bf0e9-7b23-4dab-85dd-d30f6e3d9779] Running
	I1121 14:56:55.815246  467525 system_pods.go:89] "kube-controller-manager-old-k8s-version-357479" [6c4bc234-c983-45e3-afd4-51a8a7bb404d] Running
	I1121 14:56:55.815250  467525 system_pods.go:89] "kube-proxy-f2r9z" [2cbd7fde-1632-46fe-82b2-1ee3dff9f82d] Running
	I1121 14:56:55.815255  467525 system_pods.go:89] "kube-scheduler-old-k8s-version-357479" [bdf6c3cd-3613-407e-ae64-1e64a410f629] Running
	I1121 14:56:55.815260  467525 system_pods.go:89] "storage-provisioner" [bf3aa89f-4825-45d4-82a7-ae9bcca798b5] Running
	I1121 14:56:55.815283  467525 system_pods.go:126] duration metric: took 480.4416ms to wait for k8s-apps to be running ...
	I1121 14:56:55.815292  467525 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:56:55.815350  467525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:56:55.829651  467525 system_svc.go:56] duration metric: took 14.349377ms WaitForService to wait for kubelet
	I1121 14:56:55.829684  467525 kubeadm.go:587] duration metric: took 15.065641991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:56:55.829705  467525 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:56:55.832360  467525 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:56:55.832446  467525 node_conditions.go:123] node cpu capacity is 2
	I1121 14:56:55.832516  467525 node_conditions.go:105] duration metric: took 2.804071ms to run NodePressure ...
	I1121 14:56:55.832530  467525 start.go:242] waiting for startup goroutines ...
	I1121 14:56:55.832558  467525 start.go:247] waiting for cluster config update ...
	I1121 14:56:55.832577  467525 start.go:256] writing updated cluster config ...
	I1121 14:56:55.832902  467525 ssh_runner.go:195] Run: rm -f paused
	I1121 14:56:55.837278  467525 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:56:55.842859  467525 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xt9qp" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:55.848609  467525 pod_ready.go:94] pod "coredns-5dd5756b68-xt9qp" is "Ready"
	I1121 14:56:55.848635  467525 pod_ready.go:86] duration metric: took 5.745694ms for pod "coredns-5dd5756b68-xt9qp" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:55.861679  467525 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-357479" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:55.867042  467525 pod_ready.go:94] pod "etcd-old-k8s-version-357479" is "Ready"
	I1121 14:56:55.867073  467525 pod_ready.go:86] duration metric: took 5.36169ms for pod "etcd-old-k8s-version-357479" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:55.870261  467525 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-357479" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:55.875440  467525 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-357479" is "Ready"
	I1121 14:56:55.875469  467525 pod_ready.go:86] duration metric: took 5.180896ms for pod "kube-apiserver-old-k8s-version-357479" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:55.878979  467525 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-357479" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:56.241840  467525 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-357479" is "Ready"
	I1121 14:56:56.241871  467525 pod_ready.go:86] duration metric: took 362.865019ms for pod "kube-controller-manager-old-k8s-version-357479" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:56.443147  467525 pod_ready.go:83] waiting for pod "kube-proxy-f2r9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:56.841824  467525 pod_ready.go:94] pod "kube-proxy-f2r9z" is "Ready"
	I1121 14:56:56.841855  467525 pod_ready.go:86] duration metric: took 398.621968ms for pod "kube-proxy-f2r9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:57.042891  467525 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-357479" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:57.442017  467525 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-357479" is "Ready"
	I1121 14:56:57.442046  467525 pod_ready.go:86] duration metric: took 399.126293ms for pod "kube-scheduler-old-k8s-version-357479" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:56:57.442059  467525 pod_ready.go:40] duration metric: took 1.60473982s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:56:57.494702  467525 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1121 14:56:57.497799  467525 out.go:203] 
	W1121 14:56:57.500737  467525 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1121 14:56:57.503677  467525 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1121 14:56:57.507518  467525 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-357479" cluster and "default" namespace by default
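The skew warning above is worth heeding: kubectl is only supported within one minor version of the API server, and this run pairs a 1.33 client with a 1.28 control plane. The suggested workaround routes commands through minikube's version-matched binary:

    out/minikube-linux-arm64 -p old-k8s-version-357479 kubectl -- version
    out/minikube-linux-arm64 -p old-k8s-version-357479 kubectl -- get pods -A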
	
	
	==> CRI-O <==
	Nov 21 14:56:55 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:55.262809862Z" level=info msg="Created container 5fbf3f4cd3135f0765715ae883cb4ef69ffd935833244d9aabd0b65edbaca808: kube-system/coredns-5dd5756b68-xt9qp/coredns" id=6bfceebf-cab7-4f0b-b6f7-849ce11dd5e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:56:55 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:55.264392848Z" level=info msg="Starting container: 5fbf3f4cd3135f0765715ae883cb4ef69ffd935833244d9aabd0b65edbaca808" id=db92650d-ec63-465d-9f7e-ca208ca5be58 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:56:55 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:55.26999066Z" level=info msg="Started container" PID=1947 containerID=5fbf3f4cd3135f0765715ae883cb4ef69ffd935833244d9aabd0b65edbaca808 description=kube-system/coredns-5dd5756b68-xt9qp/coredns id=db92650d-ec63-465d-9f7e-ca208ca5be58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6326bff5c4cf0c20fbf291d6476e7d868cd4b0319ca92297b1b372c674ddd67
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.02522189Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d665c788-575b-471c-9e60-d6a3dd7d491f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.025353363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.032083775Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c2ff84dc7f5ba80bec620dd416c86928696f24e52e661415a4bcd645093694d2 UID:fc05db92-ca5b-43e5-a59d-474356b5cfa5 NetNS:/var/run/netns/53be7ecd-f671-4504-a786-28e9f24dca87 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400145c798}] Aliases:map[]}"
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.03225775Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.049621236Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c2ff84dc7f5ba80bec620dd416c86928696f24e52e661415a4bcd645093694d2 UID:fc05db92-ca5b-43e5-a59d-474356b5cfa5 NetNS:/var/run/netns/53be7ecd-f671-4504-a786-28e9f24dca87 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400145c798}] Aliases:map[]}"
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.049782034Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.052663661Z" level=info msg="Ran pod sandbox c2ff84dc7f5ba80bec620dd416c86928696f24e52e661415a4bcd645093694d2 with infra container: default/busybox/POD" id=d665c788-575b-471c-9e60-d6a3dd7d491f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.054095883Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4bcc5ef1-f34d-402d-96d6-b804c7283c13 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.054341654Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4bcc5ef1-f34d-402d-96d6-b804c7283c13 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.054445721Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4bcc5ef1-f34d-402d-96d6-b804c7283c13 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.057581464Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d77e9866-5309-46cb-a09b-9f2004afc27a name=/runtime.v1.ImageService/PullImage
	Nov 21 14:56:58 old-k8s-version-357479 crio[838]: time="2025-11-21T14:56:58.060593456Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:57:00 old-k8s-version-357479 crio[838]: time="2025-11-21T14:57:00.070704887Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=d77e9866-5309-46cb-a09b-9f2004afc27a name=/runtime.v1.ImageService/PullImage
	Nov 21 14:57:00 old-k8s-version-357479 crio[838]: time="2025-11-21T14:57:00.073872753Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3d5e40ab-1c27-475b-9c85-9114ea822635 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:57:00 old-k8s-version-357479 crio[838]: time="2025-11-21T14:57:00.090564969Z" level=info msg="Creating container: default/busybox/busybox" id=cec3b10c-b74d-4d51-82a1-8ce377bc4bd9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:57:00 old-k8s-version-357479 crio[838]: time="2025-11-21T14:57:00.090713984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:57:00 old-k8s-version-357479 crio[838]: time="2025-11-21T14:57:00.148784345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:57:00 old-k8s-version-357479 crio[838]: time="2025-11-21T14:57:00.162365072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:57:00 old-k8s-version-357479 crio[838]: time="2025-11-21T14:57:00.224591037Z" level=info msg="Created container 96a77288ab1166888eea222e5ed4d26b568a8f628fa5c903ad8d5a16e0550ca1: default/busybox/busybox" id=cec3b10c-b74d-4d51-82a1-8ce377bc4bd9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:57:00 old-k8s-version-357479 crio[838]: time="2025-11-21T14:57:00.228263393Z" level=info msg="Starting container: 96a77288ab1166888eea222e5ed4d26b568a8f628fa5c903ad8d5a16e0550ca1" id=ccb6875a-eef9-424a-ba79-b65b13a87f95 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:57:00 old-k8s-version-357479 crio[838]: time="2025-11-21T14:57:00.254879045Z" level=info msg="Started container" PID=2007 containerID=96a77288ab1166888eea222e5ed4d26b568a8f628fa5c903ad8d5a16e0550ca1 description=default/busybox/busybox id=ccb6875a-eef9-424a-ba79-b65b13a87f95 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2ff84dc7f5ba80bec620dd416c86928696f24e52e661415a4bcd645093694d2
	Nov 21 14:57:06 old-k8s-version-357479 crio[838]: time="2025-11-21T14:57:06.943297107Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
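The crio entries above walk one complete CRI round trip for the busybox pod: RunPodSandbox, an ImageStatus miss ("Neither image nor artifact ... found"), PullImage by tag, a second ImageStatus that now resolves the digest, then CreateContainer and StartContainer. The same inspect-then-pull sequence can be replayed by hand on the node; a minimal sketch, assuming crictl is installed and talking to CRI-O's socket (the image ref comes from the log, everything else is illustrative):

	// Hedged sketch: replays the ImageStatus -> PullImage -> ImageStatus sequence
	// from the crio log by shelling out to crictl, roughly the way minikube's
	// ssh_runner drives commands on the node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func crictl(args ...string) (string, error) {
		out, err := exec.Command("sudo", append([]string{"crictl"}, args...)...).CombinedOutput()
		return string(out), err
	}

	func main() {
		img := "gcr.io/k8s-minikube/busybox:1.28.4-glibc" // taken from the log above

		// ImageStatus: `crictl inspecti` exits non-zero when the image is absent,
		// matching the "not found" entries before the pull.
		if _, err := crictl("inspecti", img); err != nil {
			fmt.Println("image absent, pulling:", img)
			if out, err := crictl("pull", img); err != nil {
				fmt.Println("pull failed:", err, out)
				return
			}
		}
		// The second status check now succeeds, as it does just before CreateContainer.
		out, _ := crictl("inspecti", img)
		fmt.Print(out)
	}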
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	96a77288ab116       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   c2ff84dc7f5ba       busybox                                          default
	5fbf3f4cd3135       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   b6326bff5c4cf       coredns-5dd5756b68-xt9qp                         kube-system
	1039854951c04       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   9f33acb6aaac1       storage-provisioner                              kube-system
	b2bb7631e5109       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   a92209ae0b116       kindnet-2bwt6                                    kube-system
	1dc20876c1735       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   e4c91b4c10010       kube-proxy-f2r9z                                 kube-system
	6eba3a5d7981c       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   0c1ec613e7878       kube-apiserver-old-k8s-version-357479            kube-system
	1de04b20530c6       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   c8f4eb9e264e5       kube-controller-manager-old-k8s-version-357479   kube-system
	f787ded684603       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   53f51e3d981ce       kube-scheduler-old-k8s-version-357479            kube-system
	7af9c8cb68d6c       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   13b34af755545       etcd-old-k8s-version-357479                      kube-system
	
	
	==> coredns [5fbf3f4cd3135f0765715ae883cb4ef69ffd935833244d9aabd0b65edbaca808] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39798 - 20009 "HINFO IN 376469042090205082.4829924559339539004. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022586286s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-357479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-357479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=old-k8s-version-357479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_56_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-357479
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:56:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:56:59 +0000   Fri, 21 Nov 2025 14:56:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:56:59 +0000   Fri, 21 Nov 2025 14:56:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:56:59 +0000   Fri, 21 Nov 2025 14:56:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:56:59 +0000   Fri, 21 Nov 2025 14:56:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-357479
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                ab0023cc-284b-4a0e-ae5e-24c43711c856
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-xt9qp                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-357479                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-2bwt6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-357479             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-357479    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-f2r9z                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-357479             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-357479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-357479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-357479 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-357479 event: Registered Node old-k8s-version-357479 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-357479 status is now: NodeReady
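The percentages in the Allocated resources block above are straight ratios against the node's Allocatable figures, truncated to whole percent: 850m of CPU requests against 2 CPUs (2000m) is 42.5%, shown as 42%; the 100m CPU limit is 5%; and 220Mi (225280Ki) of memory requests against 8022300Ki allocatable is about 2.8%, shown as 2%. The per-pod rows follow the same rule, e.g. coredns's 100m request is 5% of the 2-CPU node.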
	
	
	==> dmesg <==
	[Nov21 14:29] overlayfs: idmapped layers are currently not supported
	[Nov21 14:33] kauditd_printk_skb: 8 callbacks suppressed
	[ +39.333625] overlayfs: idmapped layers are currently not supported
	[Nov21 14:34] overlayfs: idmapped layers are currently not supported
	[Nov21 14:35] overlayfs: idmapped layers are currently not supported
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7af9c8cb68d6c2c72eddec255676b0a85d70039905884c3961b9cc16330dcf70] <==
	{"level":"info","ts":"2025-11-21T14:56:20.610688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-21T14:56:20.610841Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-21T14:56:20.612724Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-21T14:56:20.612871Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-21T14:56:20.61297Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-21T14:56:20.613781Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-21T14:56:20.613857Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-21T14:56:21.096247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-21T14:56:21.096402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-21T14:56:21.09647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-21T14:56:21.096508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-21T14:56:21.096551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-21T14:56:21.096587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-21T14:56:21.096632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-21T14:56:21.097999Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:56:21.099535Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-357479 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-21T14:56:21.099609Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:56:21.099952Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:56:21.107157Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-21T14:56:21.108173Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-21T14:56:21.111963Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:56:21.112097Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:56:21.112153Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:56:21.112217Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-21T14:56:21.112248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:57:08 up  2:39,  0 user,  load average: 2.33, 2.90, 2.46
	Linux old-k8s-version-357479 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b2bb7631e5109c27e929af17042c7189b67c3481c7627561a44a441cc20c2c1d] <==
	I1121 14:56:44.406750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:56:44.501130       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:56:44.501387       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:56:44.501408       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:56:44.501427       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:56:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:56:44.702137       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:56:44.702214       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:56:44.702246       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:56:44.703087       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:56:44.902754       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:56:44.902889       1 metrics.go:72] Registering metrics
	I1121 14:56:44.902982       1 controller.go:711] "Syncing nftables rules"
	I1121 14:56:54.710491       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:56:54.710548       1 main.go:301] handling current node
	I1121 14:57:04.702983       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:57:04.703029       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6eba3a5d7981cde6119cd67d9b0284dcc9389ebd1d2712f07724b7f113915559] <==
	I1121 14:56:25.171372       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1121 14:56:25.190111       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1121 14:56:25.204694       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1121 14:56:25.206656       1 shared_informer.go:318] Caches are synced for configmaps
	I1121 14:56:25.208220       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 14:56:25.212339       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1121 14:56:25.216887       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1121 14:56:25.216965       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1121 14:56:25.218254       1 controller.go:624] quota admission added evaluator for: namespaces
	I1121 14:56:25.402456       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:56:25.811845       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:56:25.828291       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:56:25.828316       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:56:26.626933       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:56:26.719221       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:56:26.849083       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:56:26.860358       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1121 14:56:26.861598       1 controller.go:624] quota admission added evaluator for: endpoints
	I1121 14:56:26.866508       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:56:27.051102       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1121 14:56:28.351379       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1121 14:56:28.365286       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:56:28.376912       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1121 14:56:40.408365       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1121 14:56:40.875262       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [1de04b20530c66c6be8b3c1d6147e9f431586fbc5d4d1a27122f1abe10d04229] <==
	I1121 14:56:40.100842       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-357479"
	I1121 14:56:40.101092       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1121 14:56:40.101338       1 event.go:307] "Event occurred" object="old-k8s-version-357479" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-357479 event: Registered Node old-k8s-version-357479 in Controller"
	I1121 14:56:40.123534       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:56:40.415088       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1121 14:56:40.478424       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:56:40.487623       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:56:40.487673       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1121 14:56:40.935835       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2bwt6"
	I1121 14:56:40.942757       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f2r9z"
	I1121 14:56:41.076483       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xt9qp"
	I1121 14:56:41.119600       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wshmv"
	I1121 14:56:41.176908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="763.194481ms"
	I1121 14:56:41.258870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.897436ms"
	I1121 14:56:41.258964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.934µs"
	I1121 14:56:42.327569       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1121 14:56:42.361662       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-wshmv"
	I1121 14:56:42.374691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.459119ms"
	I1121 14:56:42.385191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.453691ms"
	I1121 14:56:42.385281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.767µs"
	I1121 14:56:54.870920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.535µs"
	I1121 14:56:54.891347       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.798µs"
	I1121 14:56:55.122040       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1121 14:56:55.759258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.103122ms"
	I1121 14:56:55.759423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.721µs"
	
	
	==> kube-proxy [1dc20876c1735b2dfebba6011bcd98ab98b7ab2f78a676cfbd403bdb27056317] <==
	I1121 14:56:41.535830       1 server_others.go:69] "Using iptables proxy"
	I1121 14:56:41.554093       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1121 14:56:41.641512       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:56:41.645037       1 server_others.go:152] "Using iptables Proxier"
	I1121 14:56:41.645092       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1121 14:56:41.645100       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1121 14:56:41.645125       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1121 14:56:41.645467       1 server.go:846] "Version info" version="v1.28.0"
	I1121 14:56:41.645486       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:56:41.648252       1 config.go:188] "Starting service config controller"
	I1121 14:56:41.648269       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1121 14:56:41.648286       1 config.go:97] "Starting endpoint slice config controller"
	I1121 14:56:41.648290       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1121 14:56:41.648688       1 config.go:315] "Starting node config controller"
	I1121 14:56:41.648696       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1121 14:56:41.748567       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1121 14:56:41.748620       1 shared_informer.go:318] Caches are synced for service config
	I1121 14:56:41.752115       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f787ded6846033ffe5f323a183032a1606761efd36a6c17ed29c193d749f4660] <==
	I1121 14:56:24.406563       1 serving.go:348] Generated self-signed cert in-memory
	W1121 14:56:26.652174       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:56:26.652271       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:56:26.652305       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:56:26.652334       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:56:26.673536       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1121 14:56:26.673632       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:56:26.675834       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1121 14:56:26.676017       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1121 14:56:26.676138       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:56:26.678584       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W1121 14:56:26.684323       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1121 14:56:26.684454       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1121 14:56:27.779444       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: I1121 14:56:41.012514    1392 topology_manager.go:215] "Topology Admit Handler" podUID="f9be6f17-42ba-4ecc-b10d-b04a9b621450" podNamespace="kube-system" podName="kindnet-2bwt6"
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: I1121 14:56:41.036839    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2cbd7fde-1632-46fe-82b2-1ee3dff9f82d-kube-proxy\") pod \"kube-proxy-f2r9z\" (UID: \"2cbd7fde-1632-46fe-82b2-1ee3dff9f82d\") " pod="kube-system/kube-proxy-f2r9z"
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: I1121 14:56:41.036911    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9be6f17-42ba-4ecc-b10d-b04a9b621450-xtables-lock\") pod \"kindnet-2bwt6\" (UID: \"f9be6f17-42ba-4ecc-b10d-b04a9b621450\") " pod="kube-system/kindnet-2bwt6"
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: I1121 14:56:41.036941    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lkf7\" (UniqueName: \"kubernetes.io/projected/f9be6f17-42ba-4ecc-b10d-b04a9b621450-kube-api-access-7lkf7\") pod \"kindnet-2bwt6\" (UID: \"f9be6f17-42ba-4ecc-b10d-b04a9b621450\") " pod="kube-system/kindnet-2bwt6"
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: I1121 14:56:41.036983    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9be6f17-42ba-4ecc-b10d-b04a9b621450-lib-modules\") pod \"kindnet-2bwt6\" (UID: \"f9be6f17-42ba-4ecc-b10d-b04a9b621450\") " pod="kube-system/kindnet-2bwt6"
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: I1121 14:56:41.037020    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cbd7fde-1632-46fe-82b2-1ee3dff9f82d-xtables-lock\") pod \"kube-proxy-f2r9z\" (UID: \"2cbd7fde-1632-46fe-82b2-1ee3dff9f82d\") " pod="kube-system/kube-proxy-f2r9z"
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: I1121 14:56:41.037055    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cbd7fde-1632-46fe-82b2-1ee3dff9f82d-lib-modules\") pod \"kube-proxy-f2r9z\" (UID: \"2cbd7fde-1632-46fe-82b2-1ee3dff9f82d\") " pod="kube-system/kube-proxy-f2r9z"
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: I1121 14:56:41.037081    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f9be6f17-42ba-4ecc-b10d-b04a9b621450-cni-cfg\") pod \"kindnet-2bwt6\" (UID: \"f9be6f17-42ba-4ecc-b10d-b04a9b621450\") " pod="kube-system/kindnet-2bwt6"
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: I1121 14:56:41.037130    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fzw\" (UniqueName: \"kubernetes.io/projected/2cbd7fde-1632-46fe-82b2-1ee3dff9f82d-kube-api-access-z5fzw\") pod \"kube-proxy-f2r9z\" (UID: \"2cbd7fde-1632-46fe-82b2-1ee3dff9f82d\") " pod="kube-system/kube-proxy-f2r9z"
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: W1121 14:56:41.343421    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/crio-a92209ae0b116b359034b69100ded983f406b5b5f8d415a1a7fa3e416a9229a4 WatchSource:0}: Error finding container a92209ae0b116b359034b69100ded983f406b5b5f8d415a1a7fa3e416a9229a4: Status 404 returned error can't find the container with id a92209ae0b116b359034b69100ded983f406b5b5f8d415a1a7fa3e416a9229a4
	Nov 21 14:56:41 old-k8s-version-357479 kubelet[1392]: I1121 14:56:41.697474    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f2r9z" podStartSLOduration=1.694917605 podCreationTimestamp="2025-11-21 14:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:56:41.694728269 +0000 UTC m=+13.376831137" watchObservedRunningTime="2025-11-21 14:56:41.694917605 +0000 UTC m=+13.377020457"
	Nov 21 14:56:54 old-k8s-version-357479 kubelet[1392]: I1121 14:56:54.834370    1392 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 21 14:56:54 old-k8s-version-357479 kubelet[1392]: I1121 14:56:54.869831    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2bwt6" podStartSLOduration=11.913632974 podCreationTimestamp="2025-11-21 14:56:40 +0000 UTC" firstStartedPulling="2025-11-21 14:56:41.352567195 +0000 UTC m=+13.034670055" lastFinishedPulling="2025-11-21 14:56:44.308676528 +0000 UTC m=+15.990779388" observedRunningTime="2025-11-21 14:56:44.727457317 +0000 UTC m=+16.409560169" watchObservedRunningTime="2025-11-21 14:56:54.869742307 +0000 UTC m=+26.551845167"
	Nov 21 14:56:54 old-k8s-version-357479 kubelet[1392]: I1121 14:56:54.870285    1392 topology_manager.go:215] "Topology Admit Handler" podUID="e7a7e00e-fd76-4248-842e-930e4f4bc7c9" podNamespace="kube-system" podName="coredns-5dd5756b68-xt9qp"
	Nov 21 14:56:54 old-k8s-version-357479 kubelet[1392]: I1121 14:56:54.875259    1392 topology_manager.go:215] "Topology Admit Handler" podUID="bf3aa89f-4825-45d4-82a7-ae9bcca798b5" podNamespace="kube-system" podName="storage-provisioner"
	Nov 21 14:56:54 old-k8s-version-357479 kubelet[1392]: I1121 14:56:54.954316    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bf3aa89f-4825-45d4-82a7-ae9bcca798b5-tmp\") pod \"storage-provisioner\" (UID: \"bf3aa89f-4825-45d4-82a7-ae9bcca798b5\") " pod="kube-system/storage-provisioner"
	Nov 21 14:56:54 old-k8s-version-357479 kubelet[1392]: I1121 14:56:54.954558    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7a7e00e-fd76-4248-842e-930e4f4bc7c9-config-volume\") pod \"coredns-5dd5756b68-xt9qp\" (UID: \"e7a7e00e-fd76-4248-842e-930e4f4bc7c9\") " pod="kube-system/coredns-5dd5756b68-xt9qp"
	Nov 21 14:56:54 old-k8s-version-357479 kubelet[1392]: I1121 14:56:54.954662    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm67q\" (UniqueName: \"kubernetes.io/projected/bf3aa89f-4825-45d4-82a7-ae9bcca798b5-kube-api-access-pm67q\") pod \"storage-provisioner\" (UID: \"bf3aa89f-4825-45d4-82a7-ae9bcca798b5\") " pod="kube-system/storage-provisioner"
	Nov 21 14:56:54 old-k8s-version-357479 kubelet[1392]: I1121 14:56:54.954702    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpthp\" (UniqueName: \"kubernetes.io/projected/e7a7e00e-fd76-4248-842e-930e4f4bc7c9-kube-api-access-gpthp\") pod \"coredns-5dd5756b68-xt9qp\" (UID: \"e7a7e00e-fd76-4248-842e-930e4f4bc7c9\") " pod="kube-system/coredns-5dd5756b68-xt9qp"
	Nov 21 14:56:55 old-k8s-version-357479 kubelet[1392]: W1121 14:56:55.187239    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/crio-9f33acb6aaac1493408f132d84a14c30cf770b1df61b58558904fc11b6235658 WatchSource:0}: Error finding container 9f33acb6aaac1493408f132d84a14c30cf770b1df61b58558904fc11b6235658: Status 404 returned error can't find the container with id 9f33acb6aaac1493408f132d84a14c30cf770b1df61b58558904fc11b6235658
	Nov 21 14:56:55 old-k8s-version-357479 kubelet[1392]: W1121 14:56:55.211219    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/crio-b6326bff5c4cf0c20fbf291d6476e7d868cd4b0319ca92297b1b372c674ddd67 WatchSource:0}: Error finding container b6326bff5c4cf0c20fbf291d6476e7d868cd4b0319ca92297b1b372c674ddd67: Status 404 returned error can't find the container with id b6326bff5c4cf0c20fbf291d6476e7d868cd4b0319ca92297b1b372c674ddd67
	Nov 21 14:56:55 old-k8s-version-357479 kubelet[1392]: I1121 14:56:55.745892    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.745846768 podCreationTimestamp="2025-11-21 14:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:56:55.732082465 +0000 UTC m=+27.414185325" watchObservedRunningTime="2025-11-21 14:56:55.745846768 +0000 UTC m=+27.427949628"
	Nov 21 14:56:57 old-k8s-version-357479 kubelet[1392]: I1121 14:56:57.720638    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xt9qp" podStartSLOduration=17.7205959 podCreationTimestamp="2025-11-21 14:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:56:55.746712518 +0000 UTC m=+27.428815378" watchObservedRunningTime="2025-11-21 14:56:57.7205959 +0000 UTC m=+29.402698752"
	Nov 21 14:56:57 old-k8s-version-357479 kubelet[1392]: I1121 14:56:57.720918    1392 topology_manager.go:215] "Topology Admit Handler" podUID="fc05db92-ca5b-43e5-a59d-474356b5cfa5" podNamespace="default" podName="busybox"
	Nov 21 14:56:57 old-k8s-version-357479 kubelet[1392]: I1121 14:56:57.772920    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr57p\" (UniqueName: \"kubernetes.io/projected/fc05db92-ca5b-43e5-a59d-474356b5cfa5-kube-api-access-zr57p\") pod \"busybox\" (UID: \"fc05db92-ca5b-43e5-a59d-474356b5cfa5\") " pod="default/busybox"
	
	
	==> storage-provisioner [1039854951c04669679a3ca482624c0c8c1270d3bd210687afd269fef10e9f4b] <==
	I1121 14:56:55.272690       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:56:55.304546       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:56:55.304677       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1121 14:56:55.328507       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:56:55.328782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-357479_77b6c019-bc49-4fc3-9a71-38b470c186b9!
	I1121 14:56:55.329959       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c725c8a8-b445-4999-8998-79842ae68d9e", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-357479_77b6c019-bc49-4fc3-9a71-38b470c186b9 became leader
	I1121 14:56:55.429276       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-357479_77b6c019-bc49-4fc3-9a71-38b470c186b9!
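The storage-provisioner log shows client-go's Endpoints-based leader election: the controller acquires the kube-system/k8s.io-minikube-hostpath lease before starting its provisioning loop, and the holder identity (node name plus a per-process UUID) is what appears in the LeaderElection event. On a live cluster the current holder can usually be read back from the Endpoints object, e.g. `kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml`; the holder record living in a `control-plane.alpha.kubernetes.io/leader` annotation is an assumption from client-go's EndpointsLock convention, not something visible in this log.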
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-357479 -n old-k8s-version-357479
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-357479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (8.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-357479 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-357479 --alsologtostderr -v=1: exit status 80 (2.529919975s)
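The stderr trace below shows the pause path step by step: minikube disables the kubelet, enumerates kube-system / kubernetes-dashboard / istio-operator containers through crictl's pod-namespace labels, then asks `sudo runc list -f json` for the running set so it can freeze them. That last step fails every time here because /run/runc, runc's default state directory, does not exist on the node, which is consistent with CRI-O driving these containers through a different runtime (it commonly defaults to crun), so after a few jittered retries minikube gives up with GUEST_PAUSE and exit status 80. A minimal sketch of that retry shape, assuming the same commands (simplified; not minikube's actual retry.go):

	// Hedged sketch of the list-then-retry behaviour visible in the trace:
	// run `sudo runc list -f json` a few times with a growing delay, and give
	// up with the listing error if it never succeeds (minikube surfaces this
	// as GUEST_PAUSE / exit status 80).
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func listRunning() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		delay := 150 * time.Millisecond
		for attempt := 0; attempt < 3; attempt++ {
			out, err := listRunning()
			if err == nil {
				fmt.Println(string(out))
				return
			}
			// Matches the "will retry after ..." lines below; on this node the
			// error is persistent ("open /run/runc: no such file or directory").
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		fmt.Println("Exiting due to GUEST_PAUSE: could not list running containers")
	}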

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-357479 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:58:25.374028  474162 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:58:25.374265  474162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:58:25.374293  474162 out.go:374] Setting ErrFile to fd 2...
	I1121 14:58:25.374330  474162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:58:25.374707  474162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:58:25.375089  474162 out.go:368] Setting JSON to false
	I1121 14:58:25.375141  474162 mustload.go:66] Loading cluster: old-k8s-version-357479
	I1121 14:58:25.375578  474162 config.go:182] Loaded profile config "old-k8s-version-357479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1121 14:58:25.376192  474162 cli_runner.go:164] Run: docker container inspect old-k8s-version-357479 --format={{.State.Status}}
	I1121 14:58:25.443517  474162 host.go:66] Checking if "old-k8s-version-357479" exists ...
	I1121 14:58:25.443823  474162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:58:25.608248  474162 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-21 14:58:25.598399667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:58:25.609035  474162 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-357479 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 14:58:25.612875  474162 out.go:179] * Pausing node old-k8s-version-357479 ... 
	I1121 14:58:25.615726  474162 host.go:66] Checking if "old-k8s-version-357479" exists ...
	I1121 14:58:25.616047  474162 ssh_runner.go:195] Run: systemctl --version
	I1121 14:58:25.616090  474162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-357479
	I1121 14:58:25.650971  474162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/old-k8s-version-357479/id_rsa Username:docker}
	I1121 14:58:25.760015  474162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:58:25.788953  474162 pause.go:52] kubelet running: true
	I1121 14:58:25.789019  474162 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:58:26.208469  474162 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:58:26.208555  474162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:58:26.347332  474162 cri.go:89] found id: "30b10a34043583ba1f78c5ab76dd01d93f62c25e440b077d1355ec25ade82c83"
	I1121 14:58:26.347395  474162 cri.go:89] found id: "8a8d5631e51cacf53d506fa4a04f85f6215a24372cb5bc9461c77a553351e692"
	I1121 14:58:26.347424  474162 cri.go:89] found id: "a7380a6949acd872625d6f9c045103c7364a54a4fa0520562623923d643e8d9d"
	I1121 14:58:26.347441  474162 cri.go:89] found id: "ea49e692a21cb120384182095ffa391f2fb8bcf220d001779c9acdf6bc494b84"
	I1121 14:58:26.347472  474162 cri.go:89] found id: "9c2c474dad29f36153907aecb633e2ce285822491863ba62f7db3147f7a895c6"
	I1121 14:58:26.347492  474162 cri.go:89] found id: "96c28d59af9dc044a74c9ec3836f37a7c38007a450b148fbbd3efe7dfe087216"
	I1121 14:58:26.347508  474162 cri.go:89] found id: "069138de88a3a043bcedc2015132da995f1f67098719cfcccc8ad8edadcf1c6b"
	I1121 14:58:26.347522  474162 cri.go:89] found id: "53c68e0361bdc5178c617b0c2656901eb2b40db3c48b84123beeedd42f17b52b"
	I1121 14:58:26.347538  474162 cri.go:89] found id: "b0b41441d2ebefe39ee2acf353a3ca206cd126618c88325038515b9d85d7f838"
	I1121 14:58:26.347579  474162 cri.go:89] found id: "ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc"
	I1121 14:58:26.347595  474162 cri.go:89] found id: "4265cf599c2b9bd90aebc621ed98272108d9ad03647acec31d485ea27a3b7d54"
	I1121 14:58:26.347609  474162 cri.go:89] found id: ""
	I1121 14:58:26.347684  474162 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:58:26.373543  474162 retry.go:31] will retry after 157.647361ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:58:26Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:58:26.531995  474162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:58:26.552856  474162 pause.go:52] kubelet running: false
	I1121 14:58:26.552971  474162 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:58:26.849169  474162 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:58:26.849295  474162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:58:26.978461  474162 cri.go:89] found id: "30b10a34043583ba1f78c5ab76dd01d93f62c25e440b077d1355ec25ade82c83"
	I1121 14:58:26.978522  474162 cri.go:89] found id: "8a8d5631e51cacf53d506fa4a04f85f6215a24372cb5bc9461c77a553351e692"
	I1121 14:58:26.978550  474162 cri.go:89] found id: "a7380a6949acd872625d6f9c045103c7364a54a4fa0520562623923d643e8d9d"
	I1121 14:58:26.978567  474162 cri.go:89] found id: "ea49e692a21cb120384182095ffa391f2fb8bcf220d001779c9acdf6bc494b84"
	I1121 14:58:26.978583  474162 cri.go:89] found id: "9c2c474dad29f36153907aecb633e2ce285822491863ba62f7db3147f7a895c6"
	I1121 14:58:26.978615  474162 cri.go:89] found id: "96c28d59af9dc044a74c9ec3836f37a7c38007a450b148fbbd3efe7dfe087216"
	I1121 14:58:26.978631  474162 cri.go:89] found id: "069138de88a3a043bcedc2015132da995f1f67098719cfcccc8ad8edadcf1c6b"
	I1121 14:58:26.978647  474162 cri.go:89] found id: "53c68e0361bdc5178c617b0c2656901eb2b40db3c48b84123beeedd42f17b52b"
	I1121 14:58:26.978662  474162 cri.go:89] found id: "b0b41441d2ebefe39ee2acf353a3ca206cd126618c88325038515b9d85d7f838"
	I1121 14:58:26.978693  474162 cri.go:89] found id: "ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc"
	I1121 14:58:26.978714  474162 cri.go:89] found id: "4265cf599c2b9bd90aebc621ed98272108d9ad03647acec31d485ea27a3b7d54"
	I1121 14:58:26.978730  474162 cri.go:89] found id: ""
	I1121 14:58:26.978815  474162 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:58:26.996707  474162 retry.go:31] will retry after 280.502777ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:58:26Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:58:27.278208  474162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:58:27.297447  474162 pause.go:52] kubelet running: false
	I1121 14:58:27.297532  474162 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:58:27.588278  474162 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:58:27.588353  474162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:58:27.713666  474162 cri.go:89] found id: "30b10a34043583ba1f78c5ab76dd01d93f62c25e440b077d1355ec25ade82c83"
	I1121 14:58:27.713687  474162 cri.go:89] found id: "8a8d5631e51cacf53d506fa4a04f85f6215a24372cb5bc9461c77a553351e692"
	I1121 14:58:27.713692  474162 cri.go:89] found id: "a7380a6949acd872625d6f9c045103c7364a54a4fa0520562623923d643e8d9d"
	I1121 14:58:27.713696  474162 cri.go:89] found id: "ea49e692a21cb120384182095ffa391f2fb8bcf220d001779c9acdf6bc494b84"
	I1121 14:58:27.713700  474162 cri.go:89] found id: "9c2c474dad29f36153907aecb633e2ce285822491863ba62f7db3147f7a895c6"
	I1121 14:58:27.713703  474162 cri.go:89] found id: "96c28d59af9dc044a74c9ec3836f37a7c38007a450b148fbbd3efe7dfe087216"
	I1121 14:58:27.713706  474162 cri.go:89] found id: "069138de88a3a043bcedc2015132da995f1f67098719cfcccc8ad8edadcf1c6b"
	I1121 14:58:27.713709  474162 cri.go:89] found id: "53c68e0361bdc5178c617b0c2656901eb2b40db3c48b84123beeedd42f17b52b"
	I1121 14:58:27.713712  474162 cri.go:89] found id: "b0b41441d2ebefe39ee2acf353a3ca206cd126618c88325038515b9d85d7f838"
	I1121 14:58:27.713718  474162 cri.go:89] found id: "ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc"
	I1121 14:58:27.713722  474162 cri.go:89] found id: "4265cf599c2b9bd90aebc621ed98272108d9ad03647acec31d485ea27a3b7d54"
	I1121 14:58:27.713729  474162 cri.go:89] found id: ""
	I1121 14:58:27.713778  474162 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:58:27.730283  474162 out.go:203] 
	W1121 14:58:27.733265  474162 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:58:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:58:27.733289  474162 out.go:285] * 
	W1121 14:58:27.739133  474162 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:58:27.744439  474162 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-357479 --alsologtostderr -v=1 failed: exit status 80
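The pause failure above is minikube retrying `sudo runc list -f json` and giving up once its backoff budget is spent (the retry.go lines in the stderr show the individual attempts): `runc list` reads runc's state directory, /run/runc by default, and on this CRI-O node that directory does not exist, so every attempt exits with status 1 and the command finally aborts with GUEST_PAUSE. A minimal Go sketch of that retry-then-fail shape, runnable directly on the node; the helper name, attempt count, and backoff values are illustrative stand-ins, not minikube's actual retry.go:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunningContainers mirrors the pattern in the log: run
	// "sudo runc list -f json" and, on a non-zero exit, retry after a
	// growing delay before giving up with the last error.
	func listRunningContainers(attempts int) ([]byte, error) {
		var lastErr error
		delay := 150 * time.Millisecond
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
			if err == nil {
				return out, nil
			}
			lastErr = fmt.Errorf("list running: runc: %w", err)
			time.Sleep(delay)
			delay *= 2 // rough stand-in for the randomized backoff seen in retry.go
		}
		return nil, lastErr
	}

	func main() {
		out, err := listRunningContainers(3)
		if err != nil {
			// With no /run/runc state directory this fails exactly like
			// the pause path above.
			fmt.Println("pause would abort here:", err)
			return
		}
		fmt.Printf("%s\n", out)
	}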
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-357479
helpers_test.go:243: (dbg) docker inspect old-k8s-version-357479:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19",
	        "Created": "2025-11-21T14:56:00.807071627Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:57:22.002311884Z",
	            "FinishedAt": "2025-11-21T14:57:21.1839804Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/hostname",
	        "HostsPath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/hosts",
	        "LogPath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19-json.log",
	        "Name": "/old-k8s-version-357479",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-357479:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-357479",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19",
	                "LowerDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-357479",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-357479/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-357479",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-357479",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-357479",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df2ffe6c5cb4cec39d0dba2fcdb1540b2cab1660d707ca671a4e9cad3964f034",
	            "SandboxKey": "/var/run/docker/netns/df2ffe6c5cb4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-357479": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:26:1b:05:4f:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7422cf90b3020c38636ee057796200a23d8dcb6121ac9112e0ea63b06e8fa49d",
	                    "EndpointID": "744a4154c6562745873f08157e931939e90e7a1a8ee8aab3d7a8942a1008efaa",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-357479",
	                        "0fe519ab5875"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
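Later in this post-mortem minikube extracts single fields from exactly this JSON with `docker container inspect -f` Go templates (for example the 22/tcp HostPort it dials to reach SSH on 127.0.0.1:33423). A small Go sketch doing the equivalent by unmarshalling the `docker inspect` output above; the struct is a hand-rolled subset of the fields read here, not Docker's own API types:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the fields this sketch reads from
	// `docker inspect` output; everything else is ignored.
	type inspectEntry struct {
		Name            string
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-357479").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			panic("no inspect data")
		}
		// For the container above this prints 127.0.0.1:33423, the
		// address minikube's SSH client connects to.
		for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort)
		}
	}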
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357479 -n old-k8s-version-357479
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357479 -n old-k8s-version-357479: exit status 2 (498.400093ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
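The `--format={{.Host}}` flag above is a Go text/template rendered against minikube's status struct, which is why the command prints a bare `Running` even though the overall exit code is 2 (other components, such as the kubelet, are down). A self-contained sketch of the same mechanism; the struct here is an illustrative stand-in, not minikube's actual status type:

	package main

	import (
		"os"
		"text/template"
	)

	// status is an illustrative stand-in for the fields the
	// --format template can reference.
	type status struct {
		Host    string
		Kubelet string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Matches the post-mortem: host container running, kubelet stopped.
		_ = tmpl.Execute(os.Stdout, status{Host: "Running", Kubelet: "Stopped"})
	}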
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-357479 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-357479 logs -n 25: (1.85617686s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-609503 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo containerd config dump                                                                                                                                                                                                  │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo crio config                                                                                                                                                                                                             │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ delete  │ -p cilium-609503                                                                                                                                                                                                                              │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │ 21 Nov 25 14:54 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-304879   │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │ 21 Nov 25 14:55 UTC │
	│ delete  │ -p force-systemd-env-360486                                                                                                                                                                                                                   │ force-systemd-env-360486 │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p cert-options-605096 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ ssh     │ cert-options-605096 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ ssh     │ -p cert-options-605096 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ delete  │ -p cert-options-605096                                                                                                                                                                                                                        │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-357479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │                     │
	│ stop    │ -p old-k8s-version-357479 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-357479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-304879   │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	│ image   │ old-k8s-version-357479 image list --format=json                                                                                                                                                                                               │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ pause   │ -p old-k8s-version-357479 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:58:14
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:58:14.154165  473269 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:58:14.154285  473269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:58:14.154289  473269 out.go:374] Setting ErrFile to fd 2...
	I1121 14:58:14.154293  473269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:58:14.154557  473269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:58:14.154917  473269 out.go:368] Setting JSON to false
	I1121 14:58:14.155955  473269 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9646,"bootTime":1763727448,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:58:14.156015  473269 start.go:143] virtualization:  
	I1121 14:58:14.159746  473269 out.go:179] * [cert-expiration-304879] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:58:14.163644  473269 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:58:14.163720  473269 notify.go:221] Checking for updates...
	I1121 14:58:14.167907  473269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:58:14.170876  473269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:58:14.173720  473269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:58:14.176679  473269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:58:14.179700  473269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:58:14.183125  473269 config.go:182] Loaded profile config "cert-expiration-304879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:14.183655  473269 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:58:14.216773  473269 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:58:14.216882  473269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:58:14.285030  473269 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:58:14.274651669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:58:14.285130  473269 docker.go:319] overlay module found
	I1121 14:58:14.288216  473269 out.go:179] * Using the docker driver based on existing profile
	I1121 14:58:14.291132  473269 start.go:309] selected driver: docker
	I1121 14:58:14.291143  473269 start.go:930] validating driver "docker" against &{Name:cert-expiration-304879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-304879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:58:14.291220  473269 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:58:14.291986  473269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:58:14.358635  473269 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:58:14.34900988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:58:14.358934  473269 cni.go:84] Creating CNI manager for ""
	I1121 14:58:14.358987  473269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:58:14.359046  473269 start.go:353] cluster config:
	{Name:cert-expiration-304879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-304879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:58:14.364053  473269 out.go:179] * Starting "cert-expiration-304879" primary control-plane node in "cert-expiration-304879" cluster
	I1121 14:58:14.366847  473269 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:58:14.369659  473269 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:58:14.372559  473269 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:14.372608  473269 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 14:58:14.372614  473269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:58:14.372627  473269 cache.go:65] Caching tarball of preloaded images
	I1121 14:58:14.372712  473269 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 14:58:14.372720  473269 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:58:14.372825  473269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/config.json ...
	I1121 14:58:14.393131  473269 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:58:14.393142  473269 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:58:14.393161  473269 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:58:14.393182  473269 start.go:360] acquireMachinesLock for cert-expiration-304879: {Name:mkc4329526e107d36fb9171724418356adab2e02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:14.393256  473269 start.go:364] duration metric: took 48.189µs to acquireMachinesLock for "cert-expiration-304879"
	I1121 14:58:14.393277  473269 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:58:14.393281  473269 fix.go:54] fixHost starting: 
	I1121 14:58:14.393568  473269 cli_runner.go:164] Run: docker container inspect cert-expiration-304879 --format={{.State.Status}}
	I1121 14:58:14.410195  473269 fix.go:112] recreateIfNeeded on cert-expiration-304879: state=Running err=<nil>
	W1121 14:58:14.410215  473269 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:58:14.413540  473269 out.go:252] * Updating the running docker "cert-expiration-304879" container ...
	I1121 14:58:14.413567  473269 machine.go:94] provisionDockerMachine start ...
	I1121 14:58:14.413732  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:14.431809  473269 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:14.432120  473269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1121 14:58:14.432127  473269 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:58:14.577162  473269 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-304879
	
	I1121 14:58:14.577175  473269 ubuntu.go:182] provisioning hostname "cert-expiration-304879"
	I1121 14:58:14.577239  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:14.596587  473269 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:14.596921  473269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1121 14:58:14.596931  473269 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-304879 && echo "cert-expiration-304879" | sudo tee /etc/hostname
	I1121 14:58:14.758722  473269 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-304879
	
	I1121 14:58:14.758790  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:14.778629  473269 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:14.778943  473269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1121 14:58:14.778959  473269 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-304879' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-304879/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-304879' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:58:14.924723  473269 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:58:14.924737  473269 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 14:58:14.924763  473269 ubuntu.go:190] setting up certificates
	I1121 14:58:14.924772  473269 provision.go:84] configureAuth start
	I1121 14:58:14.924837  473269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-304879
	I1121 14:58:14.944004  473269 provision.go:143] copyHostCerts
	I1121 14:58:14.944061  473269 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 14:58:14.944077  473269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 14:58:14.944160  473269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 14:58:14.944255  473269 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 14:58:14.944261  473269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 14:58:14.944286  473269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 14:58:14.944334  473269 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 14:58:14.944337  473269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 14:58:14.944359  473269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 14:58:14.944583  473269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-304879 san=[127.0.0.1 192.168.85.2 cert-expiration-304879 localhost minikube]
	I1121 14:58:15.085163  473269 provision.go:177] copyRemoteCerts
	I1121 14:58:15.085244  473269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:58:15.085294  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:15.105660  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:15.214982  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:58:15.237539  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:58:15.259717  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:58:15.287235  473269 provision.go:87] duration metric: took 362.449161ms to configureAuth
	I1121 14:58:15.287252  473269 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:58:15.287433  473269 config.go:182] Loaded profile config "cert-expiration-304879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:15.287548  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:15.305232  473269 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:15.305543  473269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1121 14:58:15.305555  473269 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:58:20.718874  473269 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:58:20.718887  473269 machine.go:97] duration metric: took 6.30531341s to provisionDockerMachine
	I1121 14:58:20.718897  473269 start.go:293] postStartSetup for "cert-expiration-304879" (driver="docker")
	I1121 14:58:20.718922  473269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:58:20.718997  473269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:58:20.719061  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:20.738351  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:20.841845  473269 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:58:20.845680  473269 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:58:20.845697  473269 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:58:20.845706  473269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 14:58:20.845763  473269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 14:58:20.845840  473269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 14:58:20.845937  473269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:58:20.853799  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:58:20.872681  473269 start.go:296] duration metric: took 153.769151ms for postStartSetup
	I1121 14:58:20.872769  473269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:58:20.872824  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:20.890471  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:20.990077  473269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:58:20.995473  473269 fix.go:56] duration metric: took 6.602182431s for fixHost
	I1121 14:58:20.995499  473269 start.go:83] releasing machines lock for "cert-expiration-304879", held for 6.602235256s
	I1121 14:58:20.995597  473269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-304879
	I1121 14:58:21.015805  473269 ssh_runner.go:195] Run: cat /version.json
	I1121 14:58:21.015848  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:21.016132  473269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:58:21.016178  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:21.041472  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:21.043107  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:21.260778  473269 ssh_runner.go:195] Run: systemctl --version
	I1121 14:58:21.268328  473269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:58:21.326990  473269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:58:21.332237  473269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:58:21.332315  473269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:58:21.342279  473269 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:58:21.342293  473269 start.go:496] detecting cgroup driver to use...
	I1121 14:58:21.342323  473269 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:58:21.342368  473269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:58:21.359157  473269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:58:21.373036  473269 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:58:21.373089  473269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:58:21.389572  473269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:58:21.405208  473269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:58:21.555472  473269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:58:21.695697  473269 docker.go:234] disabling docker service ...
	I1121 14:58:21.695787  473269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:58:21.711562  473269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:58:21.724586  473269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:58:21.867482  473269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:58:22.001924  473269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:58:22.020329  473269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:58:22.045745  473269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:58:22.045804  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.055979  473269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 14:58:22.056053  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.065599  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.076052  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.086199  473269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:58:22.096316  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.106242  473269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.116055  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.126177  473269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:58:22.134959  473269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:58:22.143693  473269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:22.280240  473269 ssh_runner.go:195] Run: sudo systemctl restart crio
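Note: the block above first writes /etc/crictl.yaml so crictl targets the CRI-O socket, then rewrites the /etc/crio/crio.conf.d/02-crio.conf drop-in (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before restarting CRI-O. A minimal sketch for spot-checking the result on the node, assuming the sed edits applied cleanly:

	cat /etc/crictl.yaml    # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the log lines above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",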
	I1121 14:58:22.491298  473269 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:58:22.491356  473269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:58:22.495364  473269 start.go:564] Will wait 60s for crictl version
	I1121 14:58:22.495416  473269 ssh_runner.go:195] Run: which crictl
	I1121 14:58:22.498874  473269 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:58:22.534104  473269 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:58:22.534188  473269 ssh_runner.go:195] Run: crio --version
	I1121 14:58:22.566999  473269 ssh_runner.go:195] Run: crio --version
	I1121 14:58:22.605698  473269 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:58:22.608597  473269 cli_runner.go:164] Run: docker network inspect cert-expiration-304879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:58:22.626087  473269 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:58:22.630333  473269 kubeadm.go:884] updating cluster {Name:cert-expiration-304879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-304879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:58:22.630443  473269 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:22.630496  473269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:58:22.665974  473269 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:58:22.665986  473269 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:58:22.666050  473269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:58:22.698549  473269 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:58:22.698560  473269 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:58:22.698567  473269 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:58:22.698678  473269 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-304879 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-304879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
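Note: the [Service] block above is the kubelet drop-in minikube renders; the empty ExecStart= line is standard systemd practice to clear the packaged default before setting the minikube-specific command line. A sketch for inspecting the unit as systemd sees it (standard systemctl subcommands):

	sudo systemctl cat kubelet                            # unit file plus 10-kubeadm.conf drop-in
	sudo systemctl show kubelet -p ExecStart --no-pager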
	I1121 14:58:22.698762  473269 ssh_runner.go:195] Run: crio config
	I1121 14:58:22.775315  473269 cni.go:84] Creating CNI manager for ""
	I1121 14:58:22.775325  473269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:58:22.775345  473269 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:58:22.775365  473269 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-304879 NodeName:cert-expiration-304879 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:58:22.775483  473269 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-304879"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:58:22.775555  473269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:58:22.784672  473269 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:58:22.784732  473269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:58:22.792630  473269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1121 14:58:22.806591  473269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:58:22.820119  473269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
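Note: the kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what lands in /var/tmp/minikube/kubeadm.yaml.new here. A hedged sketch for sanity-checking it by hand; `kubeadm config validate` exists in recent kubeadm releases, and its availability in this binary is an assumption:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new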
	I1121 14:58:22.834025  473269 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:58:22.838197  473269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:22.985749  473269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:58:22.999918  473269 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879 for IP: 192.168.85.2
	I1121 14:58:22.999929  473269 certs.go:195] generating shared ca certs ...
	I1121 14:58:22.999958  473269 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:23.000091  473269 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 14:58:23.000127  473269 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 14:58:23.000140  473269 certs.go:257] generating profile certs ...
	W1121 14:58:23.000261  473269 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1121 14:58:23.000282  473269 certs.go:624] cert expired /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.crt: expiration: 2025-11-21 14:57:48 +0000 UTC, now: 2025-11-21 14:58:23.000278419 +0000 UTC m=+8.892127495
	I1121 14:58:23.000466  473269 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.key
	I1121 14:58:23.000482  473269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.crt with IP's: []
	I1121 14:58:23.520970  473269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.crt ...
	I1121 14:58:23.520992  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.crt: {Name:mke2b8018d1a8373f075edbdacbcd94b71d4da71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:23.521134  473269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.key ...
	I1121 14:58:23.521141  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.key: {Name:mk4b26b1af3c295df7694a5711739a9d1afb273a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1121 14:58:23.521314  473269 out.go:285] ! Certificate apiserver.crt.ecf903fc has expired. Generating a new one...
	I1121 14:58:23.521379  473269 certs.go:624] cert expired /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt.ecf903fc: expiration: 2025-11-21 14:57:49 +0000 UTC, now: 2025-11-21 14:58:23.521372647 +0000 UTC m=+9.413221739
	I1121 14:58:23.521492  473269 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key.ecf903fc
	I1121 14:58:23.521516  473269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt.ecf903fc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:58:25.011877  473269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt.ecf903fc ...
	I1121 14:58:25.011897  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt.ecf903fc: {Name:mk15f56940687d3aee352b3d3952d5f085991d37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:25.012105  473269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key.ecf903fc ...
	I1121 14:58:25.012115  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key.ecf903fc: {Name:mke70d857edecfcfa1a7fc527b901027097f84fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:25.012187  473269 certs.go:382] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt.ecf903fc -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt
	I1121 14:58:25.012344  473269 certs.go:386] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key.ecf903fc -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key
	W1121 14:58:25.012765  473269 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1121 14:58:25.012838  473269 certs.go:624] cert expired /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.crt: expiration: 2025-11-21 14:57:49 +0000 UTC, now: 2025-11-21 14:58:25.012831829 +0000 UTC m=+10.904680921
	I1121 14:58:25.012945  473269 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.key
	I1121 14:58:25.012961  473269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.crt with IP's: []
	I1121 14:58:25.551536  473269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.crt ...
	I1121 14:58:25.551552  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.crt: {Name:mk2015691dcb0f8957b0976b65479b65cc5d3d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:25.551705  473269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.key ...
	I1121 14:58:25.551711  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.key: {Name:mkac077691b605a4432386d18e754befab01c180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:25.551868  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 14:58:25.551903  473269 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 14:58:25.551911  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:58:25.551932  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:58:25.551954  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:58:25.551974  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 14:58:25.552015  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:58:25.557779  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:58:25.625113  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:58:25.673115  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:58:25.736228  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:58:25.802280  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:58:25.839119  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:58:25.864983  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:58:25.897231  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:58:25.970541  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:58:26.011099  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 14:58:26.038968  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 14:58:26.074655  473269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:58:26.113834  473269 ssh_runner.go:195] Run: openssl version
	I1121 14:58:26.133673  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:58:26.162595  473269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:26.175083  473269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:26.175155  473269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:26.279984  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:58:26.307906  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 14:58:26.325007  473269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 14:58:26.329837  473269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 14:58:26.329916  473269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 14:58:26.407554  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 14:58:26.418935  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 14:58:26.427872  473269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 14:58:26.432856  473269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 14:58:26.432919  473269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 14:58:26.477235  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:58:26.487263  473269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:58:26.494837  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:58:26.549825  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:58:26.612760  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:58:26.712411  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:58:26.758588  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:58:26.830597  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
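Note: each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within 86400 seconds (24 hours); that exit code is what drove the regeneration of client.crt, apiserver.crt, and proxy-client.crt earlier in the log. A minimal sketch of the same check, using only paths shown above:

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    && echo "$c: valid for 24h+" || echo "$c: expires within 24h"
	done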
	I1121 14:58:26.930869  473269 kubeadm.go:401] StartCluster: {Name:cert-expiration-304879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-304879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:58:26.930964  473269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:58:26.931033  473269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:58:26.989255  473269 cri.go:89] found id: "f1ed67b102f2ac7bb408e7d8fa839a1444c81e4f881c2a630c966e560d98298b"
	I1121 14:58:26.989266  473269 cri.go:89] found id: "d801b53d8a136fd19b41efcb1e378d61d82510c429834c8e513ca7680a5b6ca4"
	I1121 14:58:26.989270  473269 cri.go:89] found id: "3739ffdcdcda6613176b8c40d35d3c33d50781ba7753628e57558ac860890034"
	I1121 14:58:26.989272  473269 cri.go:89] found id: "28b517f01a5aba3d33fd5f70282d159b50ef6966b14985fa215f6e30ad126e27"
	I1121 14:58:26.989274  473269 cri.go:89] found id: "fb8046c8b0ad46820ce514b73486d5e2fa4988e0136a17d3364ba750f5c37b58"
	I1121 14:58:26.989277  473269 cri.go:89] found id: "aa754288a0b3fcb3fa562f8c2093216eccc54986d8bad4c045324146168063f6"
	I1121 14:58:26.989280  473269 cri.go:89] found id: "11149f54f89b70c9fdeab321afa751e70b1946418c94db977733f1c16d1a5229"
	I1121 14:58:26.989282  473269 cri.go:89] found id: "079513ee1f3b9d029e0d529e5d92ea6f2068de1eecc790b6785a88d2220c39bb"
	I1121 14:58:26.989284  473269 cri.go:89] found id: "fd2fd6dc4b52c638b2a21f506fce4a46733b6b3e7507849dc8eadd6f472f1a82"
	I1121 14:58:26.989291  473269 cri.go:89] found id: "0589701de466d28ce07e223a018a66765f6e4273e402fe86abfeb2e573fc284c"
	I1121 14:58:26.989294  473269 cri.go:89] found id: "898b2db1e294dd813b2b6dbe0d5041c745641e32e01361ac3718a5aea64aa597"
	I1121 14:58:26.989296  473269 cri.go:89] found id: "fda7a21283fbce0af0fffca6465a8ba05057f52a223ad40cbda8a6436b2d6a6c"
	I1121 14:58:26.989298  473269 cri.go:89] found id: "f69aead55eba628672f1b35898ef434110bba356efd2df03b48f98f844648456"
	I1121 14:58:26.989300  473269 cri.go:89] found id: "10de35a2b1c7ad6881fb5756388e3b102104adfee8666f7edf23203753199e59"
	I1121 14:58:26.989302  473269 cri.go:89] found id: "72780083dda6774fb0157c5b64a1c8bbc857f5beee7b4097feceb59929cbda4c"
	I1121 14:58:26.989305  473269 cri.go:89] found id: "9bed70e6e05fe0e8a0f3f7df43c49278d743cfdbd7d613a9aff93210b37b23cd"
	I1121 14:58:26.989309  473269 cri.go:89] found id: ""
	I1121 14:58:26.989361  473269 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:58:27.010619  473269 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:58:27Z" level=error msg="open /run/runc: no such file or directory"
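Note: `runc list` reads container state from its state root (default /run/runc), which is absent on this node, so minikube logs the failure as non-fatal and proceeds with the CRI-level container listing it already obtained above. A sketch of the equivalent CRI query, copied from the command the log shows succeeding:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system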
	I1121 14:58:27.010716  473269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:58:27.023447  473269 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:58:27.023456  473269 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:58:27.023514  473269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:58:27.038294  473269 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:58:27.039054  473269 kubeconfig.go:125] found "cert-expiration-304879" server: "https://192.168.85.2:8443"
	I1121 14:58:27.040870  473269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:58:27.053708  473269 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:58:27.053731  473269 kubeadm.go:602] duration metric: took 30.271001ms to restartPrimaryControlPlane
	I1121 14:58:27.053738  473269 kubeadm.go:403] duration metric: took 122.88132ms to StartCluster
	I1121 14:58:27.053758  473269 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:27.053833  473269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:58:27.054806  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:27.055055  473269 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:58:27.055421  473269 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:58:27.055503  473269 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-304879"
	I1121 14:58:27.055518  473269 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-304879"
	W1121 14:58:27.055523  473269 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:58:27.055551  473269 host.go:66] Checking if "cert-expiration-304879" exists ...
	I1121 14:58:27.056045  473269 cli_runner.go:164] Run: docker container inspect cert-expiration-304879 --format={{.State.Status}}
	I1121 14:58:27.056422  473269 config.go:182] Loaded profile config "cert-expiration-304879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:27.056506  473269 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-304879"
	I1121 14:58:27.056518  473269 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-304879"
	I1121 14:58:27.056790  473269 cli_runner.go:164] Run: docker container inspect cert-expiration-304879 --format={{.State.Status}}
	I1121 14:58:27.067921  473269 out.go:179] * Verifying Kubernetes components...
	I1121 14:58:27.073006  473269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:27.101514  473269 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-304879"
	W1121 14:58:27.101548  473269 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:58:27.101580  473269 host.go:66] Checking if "cert-expiration-304879" exists ...
	I1121 14:58:27.102025  473269 cli_runner.go:164] Run: docker container inspect cert-expiration-304879 --format={{.State.Status}}
	I1121 14:58:27.102203  473269 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.350063067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.35737289Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.358105125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.383716305Z" level=info msg="Created container ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p/dashboard-metrics-scraper" id=ce10e2ce-8d9b-4e70-a97a-e30600562496 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.385111555Z" level=info msg="Starting container: ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc" id=045bb335-8b2a-49d9-934b-b29f447f74f6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.386763554Z" level=info msg="Started container" PID=1642 containerID=ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p/dashboard-metrics-scraper id=045bb335-8b2a-49d9-934b-b29f447f74f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3e99fdc6d8eddff0cd7a21cd0ae839da72d82ce29a0f73bef295a8dc01f13b6
	Nov 21 14:58:12 old-k8s-version-357479 conmon[1640]: conmon ab614e073a4e75fbacf5 <ninfo>: container 1642 exited with status 1
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.513509818Z" level=info msg="Removing container: 054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad" id=75cfad08-02bd-4331-bfd2-927a7619e672 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.529615087Z" level=info msg="Error loading conmon cgroup of container 054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad: cgroup deleted" id=75cfad08-02bd-4331-bfd2-927a7619e672 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.532844566Z" level=info msg="Removed container 054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p/dashboard-metrics-scraper" id=75cfad08-02bd-4331-bfd2-927a7619e672 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.214420835Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.221014933Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.221185257Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.221263839Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.224514224Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.224548793Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.224573991Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.227643173Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.22767583Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.227701094Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.231645149Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.231679989Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.231702446Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.234651397Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.234686293Z" level=info msg="Updated default CNI network name to kindnet"
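Note: the CREATE/WRITE/RENAME events above are CRI-O's watch on /etc/cni/net.d; kindnet writes 10-kindnet.conflist.temp and renames it into place, and CRI-O reloads the default CNI network on each event. A sketch for inspecting what it settled on:

	sudo ls -l /etc/cni/net.d/
	sudo cat /etc/cni/net.d/10-kindnet.conflist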
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	ab614e073a4e7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   d3e99fdc6d8ed       dashboard-metrics-scraper-5f989dc9cf-trz6p       kubernetes-dashboard
	30b10a3404358       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   79ebd5fbfab7f       storage-provisioner                              kube-system
	4265cf599c2b9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   381404616efc4       kubernetes-dashboard-8694d4445c-87tjm            kubernetes-dashboard
	8a8d5631e51ca       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           53 seconds ago      Running             coredns                     1                   bc18d744650bf       coredns-5dd5756b68-xt9qp                         kube-system
	e957f0820679b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   2ca0434dbf064       busybox                                          default
	a7380a6949acd       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           53 seconds ago      Running             kube-proxy                  1                   778aeef0c5ee1       kube-proxy-f2r9z                                 kube-system
	ea49e692a21cb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   fb7317bf7d01e       kindnet-2bwt6                                    kube-system
	9c2c474dad29f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   79ebd5fbfab7f       storage-provisioner                              kube-system
	96c28d59af9dc       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           59 seconds ago      Running             etcd                        1                   3e4a8ed0f467f       etcd-old-k8s-version-357479                      kube-system
	069138de88a3a       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           59 seconds ago      Running             kube-scheduler              1                   03f25d6156f03       kube-scheduler-old-k8s-version-357479            kube-system
	53c68e0361bdc       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           59 seconds ago      Running             kube-controller-manager     1                   342cdf61269b8       kube-controller-manager-old-k8s-version-357479   kube-system
	b0b41441d2ebe       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           59 seconds ago      Running             kube-apiserver              1                   25411b0b338e3       kube-apiserver-old-k8s-version-357479            kube-system
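Note: a listing like the table above can typically be reproduced on the node with plain crictl (an assumption; the report does not record which command produced this table):

	sudo crictl ps -a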
	
	
	==> coredns [8a8d5631e51cacf53d506fa4a04f85f6215a24372cb5bc9461c77a553351e692] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38858 - 55604 "HINFO IN 4690593338861365075.4673154822576557002. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011210408s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-357479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-357479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=old-k8s-version-357479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_56_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-357479
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:58:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:58:05 +0000   Fri, 21 Nov 2025 14:56:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:58:05 +0000   Fri, 21 Nov 2025 14:56:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:58:05 +0000   Fri, 21 Nov 2025 14:56:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:58:05 +0000   Fri, 21 Nov 2025 14:56:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-357479
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                ab0023cc-284b-4a0e-ae5e-24c43711c856
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-xt9qp                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-old-k8s-version-357479                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-2bwt6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-357479             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-357479    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-f2r9z                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-357479             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-trz6p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-87tjm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node old-k8s-version-357479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node old-k8s-version-357479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node old-k8s-version-357479 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node old-k8s-version-357479 event: Registered Node old-k8s-version-357479 in Controller
	  Normal  NodeReady                95s                kubelet          Node old-k8s-version-357479 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node old-k8s-version-357479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node old-k8s-version-357479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node old-k8s-version-357479 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node old-k8s-version-357479 event: Registered Node old-k8s-version-357479 in Controller
	
	
	==> dmesg <==
	[Nov21 14:33] kauditd_printk_skb: 8 callbacks suppressed
	[ +39.333625] overlayfs: idmapped layers are currently not supported
	[Nov21 14:34] overlayfs: idmapped layers are currently not supported
	[Nov21 14:35] overlayfs: idmapped layers are currently not supported
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [96c28d59af9dc044a74c9ec3836f37a7c38007a450b148fbbd3efe7dfe087216] <==
	{"level":"info","ts":"2025-11-21T14:57:30.445856Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:57:30.445885Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:57:30.446058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-21T14:57:30.447216Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-21T14:57:30.450068Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:57:30.450397Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:57:30.46191Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-21T14:57:30.462448Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-21T14:57:30.462607Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-21T14:57:30.47765Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-21T14:57:30.478406Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-21T14:57:32.184415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-21T14:57:32.184538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-21T14:57:32.184587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-21T14:57:32.184635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-21T14:57:32.184666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-21T14:57:32.1847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-21T14:57:32.184732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-21T14:57:32.186423Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-357479 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-21T14:57:32.186514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:57:32.1887Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-21T14:57:32.188775Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-21T14:57:32.188815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:57:32.189776Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-21T14:57:32.194298Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:58:29 up  2:41,  0 user,  load average: 1.85, 2.57, 2.38
	Linux old-k8s-version-357479 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea49e692a21cb120384182095ffa391f2fb8bcf220d001779c9acdf6bc494b84] <==
	I1121 14:57:36.022893       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:57:36.023993       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:57:36.024143       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:57:36.024156       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:57:36.024171       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:57:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:57:36.211554       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:57:36.211571       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:57:36.211579       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:57:36.211952       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:58:06.211898       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:58:06.211898       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 14:58:06.211994       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:58:06.213262       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1121 14:58:07.612506       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:58:07.612535       1 metrics.go:72] Registering metrics
	I1121 14:58:07.612614       1 controller.go:711] "Syncing nftables rules"
	I1121 14:58:16.214056       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:58:16.214160       1 main.go:301] handling current node
	I1121 14:58:26.224636       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:58:26.224665       1 main.go:301] handling current node
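	The three "Failed to watch ... i/o timeout" lines above show the kubernetes Service ClusterIP (10.96.0.1:443) unreachable for roughly 30s after the restart, until kube-proxy reprograms iptables; the informer caches then sync at 14:58:07. A minimal sketch of probing that window from inside the node, as a hypothetical standalone Go program (standard library only, not part of kindnet or the harness):
	
	// probe.go: hypothetical probe; retries a TCP dial to the kubernetes
	// Service ClusterIP until it connects or the deadline passes, mirroring
	// the ~30s window in which kindnet's reflectors logged "i/o timeout".
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		const addr = "10.96.0.1:443" // default kubernetes Service ClusterIP
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("apiserver ClusterIP reachable")
				return
			}
			fmt.Printf("dial failed (%v), retrying\n", err)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver ClusterIP still unreachable after 60s")
	}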
	
	
	==> kube-apiserver [b0b41441d2ebefe39ee2acf353a3ca206cd126618c88325038515b9d85d7f838] <==
	I1121 14:57:34.469325       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1121 14:57:34.469546       1 aggregator.go:166] initial CRD sync complete...
	I1121 14:57:34.469561       1 autoregister_controller.go:141] Starting autoregister controller
	I1121 14:57:34.469567       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:57:34.475952       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:57:34.501510       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1121 14:57:34.548431       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 14:57:34.563031       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1121 14:57:34.563903       1 shared_informer.go:318] Caches are synced for configmaps
	I1121 14:57:34.563985       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1121 14:57:34.564015       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1121 14:57:34.565293       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1121 14:57:34.577084       1 cache.go:39] Caches are synced for autoregister controller
	E1121 14:57:34.597096       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 14:57:35.153692       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:57:36.240781       1 controller.go:624] quota admission added evaluator for: namespaces
	I1121 14:57:36.284308       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1121 14:57:36.311909       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:57:36.325036       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:57:36.335437       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1121 14:57:36.408707       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.198.121"}
	I1121 14:57:36.455952       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.15.30"}
	I1121 14:57:47.131696       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:57:47.139837       1 controller.go:624] quota admission added evaluator for: endpoints
	I1121 14:57:47.169897       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [53c68e0361bdc5178c617b0c2656901eb2b40db3c48b84123beeedd42f17b52b] <==
	I1121 14:57:47.302981       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1121 14:57:47.303120       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-357479"
	I1121 14:57:47.303217       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1121 14:57:47.303284       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1121 14:57:47.303353       1 taint_manager.go:211] "Sending events to api server"
	I1121 14:57:47.304195       1 event.go:307] "Event occurred" object="old-k8s-version-357479" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-357479 event: Registered Node old-k8s-version-357479 in Controller"
	I1121 14:57:47.307691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.899014ms"
	I1121 14:57:47.307938       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.193µs"
	I1121 14:57:47.313001       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:57:47.321910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.629µs"
	I1121 14:57:47.325325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.36465ms"
	I1121 14:57:47.325501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.155µs"
	I1121 14:57:47.342769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="257.431µs"
	I1121 14:57:47.658356       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:57:47.676783       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:57:47.676817       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1121 14:57:53.466323       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.014µs"
	I1121 14:57:54.476540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.675µs"
	I1121 14:57:55.477771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.249µs"
	I1121 14:57:58.493715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.824158ms"
	I1121 14:57:58.493898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.646µs"
	I1121 14:58:11.162382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.023096ms"
	I1121 14:58:11.162514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.052µs"
	I1121 14:58:12.527199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.431µs"
	I1121 14:58:19.072665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.766µs"
	
	
	==> kube-proxy [a7380a6949acd872625d6f9c045103c7364a54a4fa0520562623923d643e8d9d] <==
	I1121 14:57:35.978630       1 server_others.go:69] "Using iptables proxy"
	I1121 14:57:36.023719       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1121 14:57:36.046494       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:57:36.048332       1 server_others.go:152] "Using iptables Proxier"
	I1121 14:57:36.048476       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1121 14:57:36.048512       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1121 14:57:36.048561       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1121 14:57:36.049191       1 server.go:846] "Version info" version="v1.28.0"
	I1121 14:57:36.049705       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:57:36.054344       1 config.go:188] "Starting service config controller"
	I1121 14:57:36.054419       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1121 14:57:36.054466       1 config.go:97] "Starting endpoint slice config controller"
	I1121 14:57:36.054493       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1121 14:57:36.054908       1 config.go:315] "Starting node config controller"
	I1121 14:57:36.054954       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1121 14:57:36.155212       1 shared_informer.go:318] Caches are synced for service config
	I1121 14:57:36.155256       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1121 14:57:36.155538       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [069138de88a3a043bcedc2015132da995f1f67098719cfcccc8ad8edadcf1c6b] <==
	W1121 14:57:34.456312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1121 14:57:34.456338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1121 14:57:34.456436       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1121 14:57:34.456447       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1121 14:57:34.456519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1121 14:57:34.456537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1121 14:57:34.456606       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1121 14:57:34.456622       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1121 14:57:34.456693       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1121 14:57:34.456708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1121 14:57:34.456776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1121 14:57:34.456791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1121 14:57:34.456856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1121 14:57:34.456871       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1121 14:57:34.456922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1121 14:57:34.456938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1121 14:57:34.457000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1121 14:57:34.457014       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1121 14:57:34.457064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1121 14:57:34.457080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1121 14:57:34.457128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1121 14:57:34.457142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1121 14:57:34.457365       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1121 14:57:34.457384       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1121 14:57:35.976487       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 21 14:57:48 old-k8s-version-357479 kubelet[782]: E1121 14:57:48.503557     782 projected.go:198] Error preparing data for projected volume kube-api-access-bvttv for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-87tjm: failed to sync configmap cache: timed out waiting for the condition
	Nov 21 14:57:48 old-k8s-version-357479 kubelet[782]: E1121 14:57:48.503695     782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc172b90-47f0-4d9f-a696-97f474da198a-kube-api-access-bvttv podName:cc172b90-47f0-4d9f-a696-97f474da198a nodeName:}" failed. No retries permitted until 2025-11-21 14:57:49.003669735 +0000 UTC m=+19.896545760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bvttv" (UniqueName: "kubernetes.io/projected/cc172b90-47f0-4d9f-a696-97f474da198a-kube-api-access-bvttv") pod "kubernetes-dashboard-8694d4445c-87tjm" (UID: "cc172b90-47f0-4d9f-a696-97f474da198a") : failed to sync configmap cache: timed out waiting for the condition
	Nov 21 14:57:48 old-k8s-version-357479 kubelet[782]: E1121 14:57:48.503060     782 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 21 14:57:48 old-k8s-version-357479 kubelet[782]: E1121 14:57:48.503737     782 projected.go:198] Error preparing data for projected volume kube-api-access-t9lj7 for pod kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p: failed to sync configmap cache: timed out waiting for the condition
	Nov 21 14:57:48 old-k8s-version-357479 kubelet[782]: E1121 14:57:48.503767     782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0dc8207b-428e-4ee8-80f6-41913c5d6bc8-kube-api-access-t9lj7 podName:0dc8207b-428e-4ee8-80f6-41913c5d6bc8 nodeName:}" failed. No retries permitted until 2025-11-21 14:57:49.003756981 +0000 UTC m=+19.896633007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t9lj7" (UniqueName: "kubernetes.io/projected/0dc8207b-428e-4ee8-80f6-41913c5d6bc8-kube-api-access-t9lj7") pod "dashboard-metrics-scraper-5f989dc9cf-trz6p" (UID: "0dc8207b-428e-4ee8-80f6-41913c5d6bc8") : failed to sync configmap cache: timed out waiting for the condition
	Nov 21 14:57:49 old-k8s-version-357479 kubelet[782]: W1121 14:57:49.100276     782 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/crio-381404616efc4fa79c3770433ac58b1107bad9995c51dac9fa1b393b29cdab15 WatchSource:0}: Error finding container 381404616efc4fa79c3770433ac58b1107bad9995c51dac9fa1b393b29cdab15: Status 404 returned error can't find the container with id 381404616efc4fa79c3770433ac58b1107bad9995c51dac9fa1b393b29cdab15
	Nov 21 14:57:53 old-k8s-version-357479 kubelet[782]: I1121 14:57:53.447575     782 scope.go:117] "RemoveContainer" containerID="9cc63b663bc90b3fc035a18bd74f5ca123110167ce66f48ff555580a271a5357"
	Nov 21 14:57:54 old-k8s-version-357479 kubelet[782]: I1121 14:57:54.451631     782 scope.go:117] "RemoveContainer" containerID="9cc63b663bc90b3fc035a18bd74f5ca123110167ce66f48ff555580a271a5357"
	Nov 21 14:57:54 old-k8s-version-357479 kubelet[782]: I1121 14:57:54.451988     782 scope.go:117] "RemoveContainer" containerID="054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad"
	Nov 21 14:57:54 old-k8s-version-357479 kubelet[782]: E1121 14:57:54.452261     782 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-trz6p_kubernetes-dashboard(0dc8207b-428e-4ee8-80f6-41913c5d6bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p" podUID="0dc8207b-428e-4ee8-80f6-41913c5d6bc8"
	Nov 21 14:57:55 old-k8s-version-357479 kubelet[782]: I1121 14:57:55.456557     782 scope.go:117] "RemoveContainer" containerID="054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad"
	Nov 21 14:57:55 old-k8s-version-357479 kubelet[782]: E1121 14:57:55.456846     782 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-trz6p_kubernetes-dashboard(0dc8207b-428e-4ee8-80f6-41913c5d6bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p" podUID="0dc8207b-428e-4ee8-80f6-41913c5d6bc8"
	Nov 21 14:57:59 old-k8s-version-357479 kubelet[782]: I1121 14:57:59.053405     782 scope.go:117] "RemoveContainer" containerID="054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad"
	Nov 21 14:57:59 old-k8s-version-357479 kubelet[782]: E1121 14:57:59.053745     782 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-trz6p_kubernetes-dashboard(0dc8207b-428e-4ee8-80f6-41913c5d6bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p" podUID="0dc8207b-428e-4ee8-80f6-41913c5d6bc8"
	Nov 21 14:58:06 old-k8s-version-357479 kubelet[782]: I1121 14:58:06.488038     782 scope.go:117] "RemoveContainer" containerID="9c2c474dad29f36153907aecb633e2ce285822491863ba62f7db3147f7a895c6"
	Nov 21 14:58:06 old-k8s-version-357479 kubelet[782]: I1121 14:58:06.523434     782 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-87tjm" podStartSLOduration=10.738594149 podCreationTimestamp="2025-11-21 14:57:47 +0000 UTC" firstStartedPulling="2025-11-21 14:57:49.104950657 +0000 UTC m=+19.997826683" lastFinishedPulling="2025-11-21 14:57:57.885886037 +0000 UTC m=+28.778762063" observedRunningTime="2025-11-21 14:57:58.479656777 +0000 UTC m=+29.372532803" watchObservedRunningTime="2025-11-21 14:58:06.519529529 +0000 UTC m=+37.412405555"
	Nov 21 14:58:12 old-k8s-version-357479 kubelet[782]: I1121 14:58:12.345373     782 scope.go:117] "RemoveContainer" containerID="054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad"
	Nov 21 14:58:12 old-k8s-version-357479 kubelet[782]: I1121 14:58:12.505566     782 scope.go:117] "RemoveContainer" containerID="054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad"
	Nov 21 14:58:12 old-k8s-version-357479 kubelet[782]: I1121 14:58:12.505846     782 scope.go:117] "RemoveContainer" containerID="ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc"
	Nov 21 14:58:12 old-k8s-version-357479 kubelet[782]: E1121 14:58:12.506135     782 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-trz6p_kubernetes-dashboard(0dc8207b-428e-4ee8-80f6-41913c5d6bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p" podUID="0dc8207b-428e-4ee8-80f6-41913c5d6bc8"
	Nov 21 14:58:19 old-k8s-version-357479 kubelet[782]: I1121 14:58:19.053434     782 scope.go:117] "RemoveContainer" containerID="ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc"
	Nov 21 14:58:19 old-k8s-version-357479 kubelet[782]: E1121 14:58:19.054253     782 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-trz6p_kubernetes-dashboard(0dc8207b-428e-4ee8-80f6-41913c5d6bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p" podUID="0dc8207b-428e-4ee8-80f6-41913c5d6bc8"
	Nov 21 14:58:26 old-k8s-version-357479 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:58:26 old-k8s-version-357479 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:58:26 old-k8s-version-357479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4265cf599c2b9bd90aebc621ed98272108d9ad03647acec31d485ea27a3b7d54] <==
	2025/11/21 14:57:57 Using namespace: kubernetes-dashboard
	2025/11/21 14:57:57 Using in-cluster config to connect to apiserver
	2025/11/21 14:57:57 Using secret token for csrf signing
	2025/11/21 14:57:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:57:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:57:57 Successful initial request to the apiserver, version: v1.28.0
	2025/11/21 14:57:57 Generating JWE encryption key
	2025/11/21 14:57:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:57:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:57:58 Initializing JWE encryption key from synchronized object
	2025/11/21 14:57:58 Creating in-cluster Sidecar client
	2025/11/21 14:57:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:57:58 Serving insecurely on HTTP port: 9090
	2025/11/21 14:58:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:57:57 Starting overwatch
	
	
	==> storage-provisioner [30b10a34043583ba1f78c5ab76dd01d93f62c25e440b077d1355ec25ade82c83] <==
	I1121 14:58:06.537528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:58:06.558733       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:58:06.558805       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1121 14:58:23.970266       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:58:23.973376       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-357479_4743d8ba-f91e-4abc-86ce-9934fb3f9dd7!
	I1121 14:58:23.974002       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c725c8a8-b445-4999-8998-79842ae68d9e", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-357479_4743d8ba-f91e-4abc-86ce-9934fb3f9dd7 became leader
	I1121 14:58:24.074989       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-357479_4743d8ba-f91e-4abc-86ce-9934fb3f9dd7!
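	The ~17s gap between "attempting to acquire leader lease" (14:58:06) and "successfully acquired lease" (14:58:23) is the provisioner waiting out the previous holder's lease. A minimal sketch of the same acquire-then-serve pattern with client-go's leader election, assuming the current Lease-based lock rather than the Endpoints lock this older provisioner uses (lock name and namespace taken from the log; everything else is illustrative):
	
	// leaderelect.go: illustrative sketch of acquire-lease-then-start.
	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		hostname, _ := os.Hostname()
	
		// Lease lock named after the one in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() { log.Println("lost lease, exiting") },
			},
		})
	}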
	
	
	==> storage-provisioner [9c2c474dad29f36153907aecb633e2ce285822491863ba62f7db3147f7a895c6] <==
	I1121 14:57:35.888723       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:58:05.891023       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
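The harness now probes component state with "minikube status" and a Go template that selects a single field. A bare os/exec sketch of the same probe, assuming the binary path, profile, and node name shown on the lines below (helpers_test.go wraps this in its own Run helper):

	// status_check.go: hypothetical equivalent of the probe below.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "old-k8s-version-357479",
			"-n", "old-k8s-version-357479").CombinedOutput()
		status := strings.TrimSpace(string(out))
		// minikube can exit non-zero while still printing a component state,
		// hence the harness's "(may be ok)" note on such exits.
		fmt.Printf("apiserver=%q err=%v\n", status, err)
	}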
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-357479 -n old-k8s-version-357479
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-357479 -n old-k8s-version-357479: exit status 2 (517.0466ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
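Before dumping logs, the harness lists every pod outside the Running phase via the field selector on the next line; an os/exec sketch of the same query (context name taken from the log, otherwise illustrative):

	// nonrunning_pods.go: hypothetical sketch of the kubectl query below.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		out, err := exec.Command("kubectl",
			"--context", "old-k8s-version-357479",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("pods not Running: %s\n", out)
	}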
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-357479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-357479
helpers_test.go:243: (dbg) docker inspect old-k8s-version-357479:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19",
	        "Created": "2025-11-21T14:56:00.807071627Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:57:22.002311884Z",
	            "FinishedAt": "2025-11-21T14:57:21.1839804Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/hostname",
	        "HostsPath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/hosts",
	        "LogPath": "/var/lib/docker/containers/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19-json.log",
	        "Name": "/old-k8s-version-357479",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-357479:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-357479",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19",
	                "LowerDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b4d0ac394452156ec3837780780e8daabe7e0050a0fe74add3a28c8e62b67e7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-357479",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-357479/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-357479",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-357479",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-357479",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df2ffe6c5cb4cec39d0dba2fcdb1540b2cab1660d707ca671a4e9cad3964f034",
	            "SandboxKey": "/var/run/docker/netns/df2ffe6c5cb4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-357479": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:26:1b:05:4f:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7422cf90b3020c38636ee057796200a23d8dcb6121ac9112e0ea63b06e8fa49d",
	                    "EndpointID": "744a4154c6562745873f08157e931939e90e7a1a8ee8aab3d7a8942a1008efaa",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-357479",
	                        "0fe519ab5875"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
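The NetworkSettings.Ports block above carries the host-side mappings (for example 8443/tcp -> 127.0.0.1:33426) that the status probes depend on. A sketch that decodes just that field from docker inspect output, with local struct types matching the JSON keys shown (hypothetical, not harness code):

	// inspect_ports.go: decodes only the port-mapping field from the
	// `docker inspect` JSON array shown above.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	
	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-357479").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		if len(containers) == 0 {
			panic("no such container")
		}
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			// For the container above this prints 127.0.0.1:33426, the
			// address minikube uses to reach the apiserver from the host.
			fmt.Printf("apiserver reachable on %s:%s\n", b.HostIp, b.HostPort)
		}
	}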
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357479 -n old-k8s-version-357479
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357479 -n old-k8s-version-357479: exit status 2 (509.010326ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
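A non-zero exit from "minikube status" encodes component-state flags rather than a hard command failure, which is why the harness tags exit status 2 with "(may be ok)". A sketch of telling that flag apart from other errors with exec.ExitError (illustrative; the exact flag semantics live in minikube's status command):

	// exitcode.go: distinguishes minikube status's state-flag exits from
	// genuine failures.
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		err := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-357479").Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all components running")
		case errors.As(err, &ee) && ee.ExitCode() == 2:
			// Matches the "(may be ok)" note: a state flag, not a failure.
			fmt.Println("exit status 2: component state flag set")
		default:
			fmt.Printf("status failed: %v\n", err)
		}
	}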
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-357479 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-357479 logs -n 25: (1.877027946s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-609503 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo containerd config dump                                                                                                                                                                                                  │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo crio config                                                                                                                                                                                                             │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ delete  │ -p cilium-609503                                                                                                                                                                                                                              │ cilium-609503            │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │ 21 Nov 25 14:54 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-304879   │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │ 21 Nov 25 14:55 UTC │
	│ delete  │ -p force-systemd-env-360486                                                                                                                                                                                                                   │ force-systemd-env-360486 │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p cert-options-605096 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ ssh     │ cert-options-605096 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ ssh     │ -p cert-options-605096 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ delete  │ -p cert-options-605096                                                                                                                                                                                                                        │ cert-options-605096      │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-357479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │                     │
	│ stop    │ -p old-k8s-version-357479 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-357479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-304879   │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	│ image   │ old-k8s-version-357479 image list --format=json                                                                                                                                                                                               │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ pause   │ -p old-k8s-version-357479 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-357479   │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:58:14
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:58:14.154165  473269 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:58:14.154285  473269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:58:14.154289  473269 out.go:374] Setting ErrFile to fd 2...
	I1121 14:58:14.154293  473269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:58:14.154557  473269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:58:14.154917  473269 out.go:368] Setting JSON to false
	I1121 14:58:14.155955  473269 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9646,"bootTime":1763727448,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:58:14.156015  473269 start.go:143] virtualization:  
	I1121 14:58:14.159746  473269 out.go:179] * [cert-expiration-304879] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:58:14.163644  473269 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:58:14.163720  473269 notify.go:221] Checking for updates...
	I1121 14:58:14.167907  473269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:58:14.170876  473269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:58:14.173720  473269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:58:14.176679  473269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:58:14.179700  473269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:58:14.183125  473269 config.go:182] Loaded profile config "cert-expiration-304879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:14.183655  473269 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:58:14.216773  473269 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:58:14.216882  473269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:58:14.285030  473269 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:58:14.274651669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:58:14.285130  473269 docker.go:319] overlay module found
	I1121 14:58:14.288216  473269 out.go:179] * Using the docker driver based on existing profile
	I1121 14:58:14.291132  473269 start.go:309] selected driver: docker
	I1121 14:58:14.291143  473269 start.go:930] validating driver "docker" against &{Name:cert-expiration-304879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-304879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:58:14.291220  473269 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:58:14.291986  473269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:58:14.358635  473269 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:58:14.34900988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:58:14.358934  473269 cni.go:84] Creating CNI manager for ""
	I1121 14:58:14.358987  473269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:58:14.359046  473269 start.go:353] cluster config:
	{Name:cert-expiration-304879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-304879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:58:14.364053  473269 out.go:179] * Starting "cert-expiration-304879" primary control-plane node in "cert-expiration-304879" cluster
	I1121 14:58:14.366847  473269 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:58:14.369659  473269 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:58:14.372559  473269 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:14.372608  473269 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 14:58:14.372614  473269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:58:14.372627  473269 cache.go:65] Caching tarball of preloaded images
	I1121 14:58:14.372712  473269 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 14:58:14.372720  473269 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:58:14.372825  473269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/config.json ...
	I1121 14:58:14.393131  473269 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:58:14.393142  473269 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:58:14.393161  473269 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:58:14.393182  473269 start.go:360] acquireMachinesLock for cert-expiration-304879: {Name:mkc4329526e107d36fb9171724418356adab2e02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:14.393256  473269 start.go:364] duration metric: took 48.189µs to acquireMachinesLock for "cert-expiration-304879"
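The acquireMachinesLock lines above show a file-backed lock with Delay:500ms and Timeout:10m0s. A minimal Go sketch of a lock with those retry semantics, assuming a simple create-exclusive lock file (names and path are hypothetical, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"time"
)

// acquireFileLock retries an exclusive-create every `delay` until `timeout`
// elapses, mirroring the Delay/Timeout fields visible in the log.
func acquireFileLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, fs.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireFileLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock acquired")
}

The 48.189µs acquisition reported above is the uncontended fast path: the first exclusive create succeeds without any retry.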
	I1121 14:58:14.393277  473269 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:58:14.393281  473269 fix.go:54] fixHost starting: 
	I1121 14:58:14.393568  473269 cli_runner.go:164] Run: docker container inspect cert-expiration-304879 --format={{.State.Status}}
	I1121 14:58:14.410195  473269 fix.go:112] recreateIfNeeded on cert-expiration-304879: state=Running err=<nil>
	W1121 14:58:14.410215  473269 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:58:14.413540  473269 out.go:252] * Updating the running docker "cert-expiration-304879" container ...
	I1121 14:58:14.413567  473269 machine.go:94] provisionDockerMachine start ...
	I1121 14:58:14.413732  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:14.431809  473269 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:14.432120  473269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1121 14:58:14.432127  473269 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:58:14.577162  473269 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-304879
	
	I1121 14:58:14.577175  473269 ubuntu.go:182] provisioning hostname "cert-expiration-304879"
	I1121 14:58:14.577239  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:14.596587  473269 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:14.596921  473269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1121 14:58:14.596931  473269 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-304879 && echo "cert-expiration-304879" | sudo tee /etc/hostname
	I1121 14:58:14.758722  473269 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-304879
	
	I1121 14:58:14.758790  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:14.778629  473269 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:14.778943  473269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1121 14:58:14.778959  473269 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-304879' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-304879/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-304879' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:58:14.924723  473269 main.go:143] libmachine: SSH cmd err, output: <nil>: 
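The SSH script above is an idempotent /etc/hosts update: it only rewrites or appends a 127.0.1.1 line when no entry for the hostname exists yet. A small Go sketch of just the guard condition, assuming the standard whitespace-separated /etc/hosts format (function name hypothetical):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hostsHasEntry reports whether any /etc/hosts line already lists name as an
// alias, mirroring the grep guard in the script above (the real script also
// rewrites an existing 127.0.1.1 line in place).
func hostsHasEntry(name string) (bool, error) {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 || strings.HasPrefix(fields[0], "#") {
			continue
		}
		for _, alias := range fields[1:] {
			if alias == name {
				return true, nil
			}
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hostsHasEntry("cert-expiration-304879")
	if err != nil {
		panic(err)
	}
	fmt.Println("hosts entry present:", ok)
}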
	I1121 14:58:14.924737  473269 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 14:58:14.924763  473269 ubuntu.go:190] setting up certificates
	I1121 14:58:14.924772  473269 provision.go:84] configureAuth start
	I1121 14:58:14.924837  473269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-304879
	I1121 14:58:14.944004  473269 provision.go:143] copyHostCerts
	I1121 14:58:14.944061  473269 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 14:58:14.944077  473269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 14:58:14.944160  473269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 14:58:14.944255  473269 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 14:58:14.944261  473269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 14:58:14.944286  473269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 14:58:14.944334  473269 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 14:58:14.944337  473269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 14:58:14.944359  473269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 14:58:14.944583  473269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-304879 san=[127.0.0.1 192.168.85.2 cert-expiration-304879 localhost minikube]
	I1121 14:58:15.085163  473269 provision.go:177] copyRemoteCerts
	I1121 14:58:15.085244  473269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:58:15.085294  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:15.105660  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:15.214982  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:58:15.237539  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:58:15.259717  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:58:15.287235  473269 provision.go:87] duration metric: took 362.449161ms to configureAuth
	I1121 14:58:15.287252  473269 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:58:15.287433  473269 config.go:182] Loaded profile config "cert-expiration-304879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:15.287548  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:15.305232  473269 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:15.305543  473269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1121 14:58:15.305555  473269 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:58:20.718874  473269 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:58:20.718887  473269 machine.go:97] duration metric: took 6.30531341s to provisionDockerMachine
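Provisioning ends by writing a one-line environment drop-in and restarting cri-o; the roughly five-second gap between the tee command at 14:58:15 and completion at 14:58:20 is that restart. A sketch that renders the same drop-in locally (the real target /etc/sysconfig/crio.minikube is root-owned, so this writes to /tmp; the CIDR value is copied from the log):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Render the drop-in the provisioner writes over SSH above.
	cidr := "10.96.0.0/12" // the cluster's service CIDR, taken from the log
	content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", cidr)
	if err := os.WriteFile("/tmp/crio.minikube", []byte(content), 0o644); err != nil {
		panic(err)
	}
	fmt.Print(content)
}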
	I1121 14:58:20.718897  473269 start.go:293] postStartSetup for "cert-expiration-304879" (driver="docker")
	I1121 14:58:20.718922  473269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:58:20.718997  473269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:58:20.719061  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:20.738351  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:20.841845  473269 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:58:20.845680  473269 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:58:20.845697  473269 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:58:20.845706  473269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 14:58:20.845763  473269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 14:58:20.845840  473269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 14:58:20.845937  473269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:58:20.853799  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:58:20.872681  473269 start.go:296] duration metric: took 153.769151ms for postStartSetup
	I1121 14:58:20.872769  473269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:58:20.872824  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:20.890471  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:20.990077  473269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:58:20.995473  473269 fix.go:56] duration metric: took 6.602182431s for fixHost
	I1121 14:58:20.995499  473269 start.go:83] releasing machines lock for "cert-expiration-304879", held for 6.602235256s
	I1121 14:58:20.995597  473269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-304879
	I1121 14:58:21.015805  473269 ssh_runner.go:195] Run: cat /version.json
	I1121 14:58:21.015848  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:21.016132  473269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:58:21.016178  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:21.041472  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:21.043107  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:21.260778  473269 ssh_runner.go:195] Run: systemctl --version
	I1121 14:58:21.268328  473269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:58:21.326990  473269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:58:21.332237  473269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:58:21.332315  473269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:58:21.342279  473269 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:58:21.342293  473269 start.go:496] detecting cgroup driver to use...
	I1121 14:58:21.342323  473269 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:58:21.342368  473269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:58:21.359157  473269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:58:21.373036  473269 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:58:21.373089  473269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:58:21.389572  473269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:58:21.405208  473269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:58:21.555472  473269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:58:21.695697  473269 docker.go:234] disabling docker service ...
	I1121 14:58:21.695787  473269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:58:21.711562  473269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:58:21.724586  473269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:58:21.867482  473269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:58:22.001924  473269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:58:22.020329  473269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:58:22.045745  473269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:58:22.045804  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.055979  473269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 14:58:22.056053  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.065599  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.076052  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.086199  473269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:58:22.096316  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.106242  473269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.116055  473269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:22.126177  473269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:58:22.134959  473269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:58:22.143693  473269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:22.280240  473269 ssh_runner.go:195] Run: sudo systemctl restart crio
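The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, default sysctls), then reloads systemd and restarts cri-o. A sketch that applies the first few of those edits as an ordered list, under the assumption that each sed expression is idempotent; running it for real requires root and an existing cri-o config:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ordered edits copied from the log above; re-running is safe because
	// each expression either replaces a line or deletes-then-appends one.
	edits := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	}
	for _, e := range edits {
		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
			fmt.Printf("edit failed: %v\n%s", err, out)
			return
		}
	}
	fmt.Println("done; apply with: sudo systemctl daemon-reload && sudo systemctl restart crio")
}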
	I1121 14:58:22.491298  473269 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:58:22.491356  473269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:58:22.495364  473269 start.go:564] Will wait 60s for crictl version
	I1121 14:58:22.495416  473269 ssh_runner.go:195] Run: which crictl
	I1121 14:58:22.498874  473269 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:58:22.534104  473269 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:58:22.534188  473269 ssh_runner.go:195] Run: crio --version
	I1121 14:58:22.566999  473269 ssh_runner.go:195] Run: crio --version
	I1121 14:58:22.605698  473269 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:58:22.608597  473269 cli_runner.go:164] Run: docker network inspect cert-expiration-304879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:58:22.626087  473269 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:58:22.630333  473269 kubeadm.go:884] updating cluster {Name:cert-expiration-304879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-304879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:58:22.630443  473269 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:22.630496  473269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:58:22.665974  473269 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:58:22.665986  473269 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:58:22.666050  473269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:58:22.698549  473269 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:58:22.698560  473269 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:58:22.698567  473269 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:58:22.698678  473269 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-304879 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-304879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:58:22.698762  473269 ssh_runner.go:195] Run: crio config
	I1121 14:58:22.775315  473269 cni.go:84] Creating CNI manager for ""
	I1121 14:58:22.775325  473269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:58:22.775345  473269 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:58:22.775365  473269 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-304879 NodeName:cert-expiration-304879 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:58:22.775483  473269 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-304879"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
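A quick way to sanity-check the rendered manifest above is to decode each of its four YAML documents and print apiVersion/kind. A sketch using the third-party gopkg.in/yaml.v3 module, with the manifest assumed saved locally as kubeadm.yaml (hypothetical filename):

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // external module: go get gopkg.in/yaml.v3
)

func main() {
	// kubeadm.yaml is a hypothetical local copy of the manifest above.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Expect InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration and KubeProxyConfiguration.
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}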
	
	I1121 14:58:22.775555  473269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:58:22.784672  473269 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:58:22.784732  473269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:58:22.792630  473269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1121 14:58:22.806591  473269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:58:22.820119  473269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1121 14:58:22.834025  473269 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:58:22.838197  473269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:22.985749  473269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:58:22.999918  473269 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879 for IP: 192.168.85.2
	I1121 14:58:22.999929  473269 certs.go:195] generating shared ca certs ...
	I1121 14:58:22.999958  473269 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:23.000091  473269 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 14:58:23.000127  473269 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 14:58:23.000140  473269 certs.go:257] generating profile certs ...
	W1121 14:58:23.000261  473269 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1121 14:58:23.000282  473269 certs.go:624] cert expired /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.crt: expiration: 2025-11-21 14:57:48 +0000 UTC, now: 2025-11-21 14:58:23.000278419 +0000 UTC m=+8.892127495
	I1121 14:58:23.000466  473269 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.key
	I1121 14:58:23.000482  473269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.crt with IP's: []
	I1121 14:58:23.520970  473269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.crt ...
	I1121 14:58:23.520992  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.crt: {Name:mke2b8018d1a8373f075edbdacbcd94b71d4da71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:23.521134  473269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.key ...
	I1121 14:58:23.521141  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/client.key: {Name:mk4b26b1af3c295df7694a5711739a9d1afb273a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1121 14:58:23.521314  473269 out.go:285] ! Certificate apiserver.crt.ecf903fc has expired. Generating a new one...
	I1121 14:58:23.521379  473269 certs.go:624] cert expired /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt.ecf903fc: expiration: 2025-11-21 14:57:49 +0000 UTC, now: 2025-11-21 14:58:23.521372647 +0000 UTC m=+9.413221739
	I1121 14:58:23.521492  473269 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key.ecf903fc
	I1121 14:58:23.521516  473269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt.ecf903fc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:58:25.011877  473269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt.ecf903fc ...
	I1121 14:58:25.011897  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt.ecf903fc: {Name:mk15f56940687d3aee352b3d3952d5f085991d37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:25.012105  473269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key.ecf903fc ...
	I1121 14:58:25.012115  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key.ecf903fc: {Name:mke70d857edecfcfa1a7fc527b901027097f84fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:25.012187  473269 certs.go:382] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt.ecf903fc -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt
	I1121 14:58:25.012344  473269 certs.go:386] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key.ecf903fc -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key
	W1121 14:58:25.012765  473269 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1121 14:58:25.012838  473269 certs.go:624] cert expired /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.crt: expiration: 2025-11-21 14:57:49 +0000 UTC, now: 2025-11-21 14:58:25.012831829 +0000 UTC m=+10.904680921
	I1121 14:58:25.012945  473269 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.key
	I1121 14:58:25.012961  473269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.crt with IP's: []
	I1121 14:58:25.551536  473269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.crt ...
	I1121 14:58:25.551552  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.crt: {Name:mk2015691dcb0f8957b0976b65479b65cc5d3d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:25.551705  473269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.key ...
	I1121 14:58:25.551711  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.key: {Name:mkac077691b605a4432386d18e754befab01c180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:25.551868  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 14:58:25.551903  473269 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 14:58:25.551911  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:58:25.551932  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:58:25.551954  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:58:25.551974  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 14:58:25.552015  473269 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:58:25.557779  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:58:25.625113  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:58:25.673115  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:58:25.736228  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:58:25.802280  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:58:25.839119  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:58:25.864983  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:58:25.897231  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/cert-expiration-304879/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:58:25.970541  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:58:26.011099  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 14:58:26.038968  473269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 14:58:26.074655  473269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:58:26.113834  473269 ssh_runner.go:195] Run: openssl version
	I1121 14:58:26.133673  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:58:26.162595  473269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:26.175083  473269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:26.175155  473269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:26.279984  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:58:26.307906  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 14:58:26.325007  473269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 14:58:26.329837  473269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 14:58:26.329916  473269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 14:58:26.407554  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 14:58:26.418935  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 14:58:26.427872  473269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 14:58:26.432856  473269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 14:58:26.432919  473269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 14:58:26.477235  473269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:58:26.487263  473269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:58:26.494837  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:58:26.549825  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:58:26.612760  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:58:26.712411  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:58:26.758588  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:58:26.830597  473269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
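The six runs above use `openssl x509 -checkend 86400`, which exits non-zero when the certificate expires within the next 86400 seconds (24 h); the same check is what flagged the expired profile certs earlier in this run. A Go analogue using crypto/x509 (the path is illustrative; on the node these certs live under /var/lib/minikube/certs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, the crypto/x509 analogue of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}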
	I1121 14:58:26.930869  473269 kubeadm.go:401] StartCluster: {Name:cert-expiration-304879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-304879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:58:26.930964  473269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:58:26.931033  473269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:58:26.989255  473269 cri.go:89] found id: "f1ed67b102f2ac7bb408e7d8fa839a1444c81e4f881c2a630c966e560d98298b"
	I1121 14:58:26.989266  473269 cri.go:89] found id: "d801b53d8a136fd19b41efcb1e378d61d82510c429834c8e513ca7680a5b6ca4"
	I1121 14:58:26.989270  473269 cri.go:89] found id: "3739ffdcdcda6613176b8c40d35d3c33d50781ba7753628e57558ac860890034"
	I1121 14:58:26.989272  473269 cri.go:89] found id: "28b517f01a5aba3d33fd5f70282d159b50ef6966b14985fa215f6e30ad126e27"
	I1121 14:58:26.989274  473269 cri.go:89] found id: "fb8046c8b0ad46820ce514b73486d5e2fa4988e0136a17d3364ba750f5c37b58"
	I1121 14:58:26.989277  473269 cri.go:89] found id: "aa754288a0b3fcb3fa562f8c2093216eccc54986d8bad4c045324146168063f6"
	I1121 14:58:26.989280  473269 cri.go:89] found id: "11149f54f89b70c9fdeab321afa751e70b1946418c94db977733f1c16d1a5229"
	I1121 14:58:26.989282  473269 cri.go:89] found id: "079513ee1f3b9d029e0d529e5d92ea6f2068de1eecc790b6785a88d2220c39bb"
	I1121 14:58:26.989284  473269 cri.go:89] found id: "fd2fd6dc4b52c638b2a21f506fce4a46733b6b3e7507849dc8eadd6f472f1a82"
	I1121 14:58:26.989291  473269 cri.go:89] found id: "0589701de466d28ce07e223a018a66765f6e4273e402fe86abfeb2e573fc284c"
	I1121 14:58:26.989294  473269 cri.go:89] found id: "898b2db1e294dd813b2b6dbe0d5041c745641e32e01361ac3718a5aea64aa597"
	I1121 14:58:26.989296  473269 cri.go:89] found id: "fda7a21283fbce0af0fffca6465a8ba05057f52a223ad40cbda8a6436b2d6a6c"
	I1121 14:58:26.989298  473269 cri.go:89] found id: "f69aead55eba628672f1b35898ef434110bba356efd2df03b48f98f844648456"
	I1121 14:58:26.989300  473269 cri.go:89] found id: "10de35a2b1c7ad6881fb5756388e3b102104adfee8666f7edf23203753199e59"
	I1121 14:58:26.989302  473269 cri.go:89] found id: "72780083dda6774fb0157c5b64a1c8bbc857f5beee7b4097feceb59929cbda4c"
	I1121 14:58:26.989305  473269 cri.go:89] found id: "9bed70e6e05fe0e8a0f3f7df43c49278d743cfdbd7d613a9aff93210b37b23cd"
	I1121 14:58:26.989309  473269 cri.go:89] found id: ""
	I1121 14:58:26.989361  473269 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:58:27.010619  473269 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:58:27Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:58:27.010716  473269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:58:27.023447  473269 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:58:27.023456  473269 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:58:27.023514  473269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:58:27.038294  473269 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:58:27.039054  473269 kubeconfig.go:125] found "cert-expiration-304879" server: "https://192.168.85.2:8443"
	I1121 14:58:27.040870  473269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:58:27.053708  473269 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:58:27.053731  473269 kubeadm.go:602] duration metric: took 30.271001ms to restartPrimaryControlPlane
	I1121 14:58:27.053738  473269 kubeadm.go:403] duration metric: took 122.88132ms to StartCluster
	I1121 14:58:27.053758  473269 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:27.053833  473269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:58:27.054806  473269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:27.055055  473269 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:58:27.055421  473269 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:58:27.055503  473269 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-304879"
	I1121 14:58:27.055518  473269 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-304879"
	W1121 14:58:27.055523  473269 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:58:27.055551  473269 host.go:66] Checking if "cert-expiration-304879" exists ...
	I1121 14:58:27.056045  473269 cli_runner.go:164] Run: docker container inspect cert-expiration-304879 --format={{.State.Status}}
	I1121 14:58:27.056422  473269 config.go:182] Loaded profile config "cert-expiration-304879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:27.056506  473269 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-304879"
	I1121 14:58:27.056518  473269 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-304879"
	I1121 14:58:27.056790  473269 cli_runner.go:164] Run: docker container inspect cert-expiration-304879 --format={{.State.Status}}
	I1121 14:58:27.067921  473269 out.go:179] * Verifying Kubernetes components...
	I1121 14:58:27.073006  473269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:27.101514  473269 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-304879"
	W1121 14:58:27.101548  473269 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:58:27.101580  473269 host.go:66] Checking if "cert-expiration-304879" exists ...
	I1121 14:58:27.102025  473269 cli_runner.go:164] Run: docker container inspect cert-expiration-304879 --format={{.State.Status}}
	I1121 14:58:27.102203  473269 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:27.105224  473269 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:58:27.105236  473269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:58:27.105314  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:27.135570  473269 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:58:27.135586  473269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:58:27.135654  473269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-304879
	I1121 14:58:27.141085  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:27.175144  473269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/cert-expiration-304879/id_rsa Username:docker}
	I1121 14:58:27.340860  473269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:58:27.400988  473269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:58:27.486163  473269 ssh_runner.go:195] Run: sudo systemctl start kubelet
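
The probe sequence above is worth unpacking: minikube runs crictl over SSH to enumerate kube-system containers, then asks runc for paused containers and tolerates the failure ("/run/runc: no such file or directory" simply means runc has no managed state yet). A minimal Go sketch of the same crictl probe, with the command and label taken verbatim from the log; running it locally on the node instead of over SSH is an assumption made for brevity:

	// List kube-system container IDs the way the log above does.
	// Assumes crictl is on PATH and a CRI socket is configured; run as root.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// crictl --quiet prints one container ID per line.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}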
	
	
	==> CRI-O <==
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.350063067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.35737289Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.358105125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.383716305Z" level=info msg="Created container ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p/dashboard-metrics-scraper" id=ce10e2ce-8d9b-4e70-a97a-e30600562496 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.385111555Z" level=info msg="Starting container: ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc" id=045bb335-8b2a-49d9-934b-b29f447f74f6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.386763554Z" level=info msg="Started container" PID=1642 containerID=ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p/dashboard-metrics-scraper id=045bb335-8b2a-49d9-934b-b29f447f74f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3e99fdc6d8eddff0cd7a21cd0ae839da72d82ce29a0f73bef295a8dc01f13b6
	Nov 21 14:58:12 old-k8s-version-357479 conmon[1640]: conmon ab614e073a4e75fbacf5 <ninfo>: container 1642 exited with status 1
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.513509818Z" level=info msg="Removing container: 054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad" id=75cfad08-02bd-4331-bfd2-927a7619e672 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.529615087Z" level=info msg="Error loading conmon cgroup of container 054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad: cgroup deleted" id=75cfad08-02bd-4331-bfd2-927a7619e672 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:58:12 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:12.532844566Z" level=info msg="Removed container 054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p/dashboard-metrics-scraper" id=75cfad08-02bd-4331-bfd2-927a7619e672 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.214420835Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.221014933Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.221185257Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.221263839Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.224514224Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.224548793Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.224573991Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.227643173Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.22767583Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.227701094Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.231645149Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.231679989Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.231702446Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.234651397Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:58:16 old-k8s-version-357479 crio[653]: time="2025-11-21T14:58:16.234686293Z" level=info msg="Updated default CNI network name to kindnet"
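
The CREATE/WRITE/RENAME churn above is CRI-O's CNI monitor reacting to kindnet updating its config atomically: the daemon writes 10-kindnet.conflist.temp, then renames it into place, and each filesystem event triggers a reload of the default network. The watch pattern itself is simple; a sketch using github.com/fsnotify/fsnotify (illustrative of the technique, not CRI-O's actual implementation):

	// Watch /etc/cni/net.d for config churn, as CRI-O's CNI monitor does above.
	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for ev := range w.Events {
			// An atomic update shows up as CREATE and WRITE on the .temp
			// file, followed by RENAME and a CREATE of the final name.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		}
	}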
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	ab614e073a4e7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   d3e99fdc6d8ed       dashboard-metrics-scraper-5f989dc9cf-trz6p       kubernetes-dashboard
	30b10a3404358       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   79ebd5fbfab7f       storage-provisioner                              kube-system
	4265cf599c2b9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   381404616efc4       kubernetes-dashboard-8694d4445c-87tjm            kubernetes-dashboard
	8a8d5631e51ca       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   bc18d744650bf       coredns-5dd5756b68-xt9qp                         kube-system
	e957f0820679b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   2ca0434dbf064       busybox                                          default
	a7380a6949acd       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   778aeef0c5ee1       kube-proxy-f2r9z                                 kube-system
	ea49e692a21cb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   fb7317bf7d01e       kindnet-2bwt6                                    kube-system
	9c2c474dad29f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   79ebd5fbfab7f       storage-provisioner                              kube-system
	96c28d59af9dc       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   3e4a8ed0f467f       etcd-old-k8s-version-357479                      kube-system
	069138de88a3a       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   03f25d6156f03       kube-scheduler-old-k8s-version-357479            kube-system
	53c68e0361bdc       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   342cdf61269b8       kube-controller-manager-old-k8s-version-357479   kube-system
	b0b41441d2ebe       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   25411b0b338e3       kube-apiserver-old-k8s-version-357479            kube-system
	
	
	==> coredns [8a8d5631e51cacf53d506fa4a04f85f6215a24372cb5bc9461c77a553351e692] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38858 - 55604 "HINFO IN 4690593338861365075.4673154822576557002. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011210408s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
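
The repeated "Still waiting on: kubernetes" lines come from CoreDNS's ready plugin, which holds the readiness endpoint down until the kubernetes plugin has synced with the API server. That endpoint is plain HTTP on port 8181 (the ready plugin's default); a sketch that polls it until it answers, assuming it is reachable from wherever the probe runs:

	// Poll CoreDNS's ready endpoint until it reports 200 OK.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		for attempt := 0; attempt < 30; attempt++ {
			resp, err := http.Get("http://127.0.0.1:8181/ready")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("coredns ready")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("coredns never became ready")
	}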
	
	
	==> describe nodes <==
	Name:               old-k8s-version-357479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-357479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=old-k8s-version-357479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_56_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-357479
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:58:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:58:05 +0000   Fri, 21 Nov 2025 14:56:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:58:05 +0000   Fri, 21 Nov 2025 14:56:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:58:05 +0000   Fri, 21 Nov 2025 14:56:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:58:05 +0000   Fri, 21 Nov 2025 14:56:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-357479
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                ab0023cc-284b-4a0e-ae5e-24c43711c856
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-xt9qp                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-old-k8s-version-357479                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m4s
	  kube-system                 kindnet-2bwt6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-357479             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-357479    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-f2r9z                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-357479             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-trz6p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-87tjm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node old-k8s-version-357479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node old-k8s-version-357479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node old-k8s-version-357479 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node old-k8s-version-357479 event: Registered Node old-k8s-version-357479 in Controller
	  Normal  NodeReady                98s                kubelet          Node old-k8s-version-357479 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node old-k8s-version-357479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node old-k8s-version-357479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node old-k8s-version-357479 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-357479 event: Registered Node old-k8s-version-357479 in Controller
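
Everything in the node description above, including the Conditions table, is rendered from the Node object's status, so the same data is one client-go call away. A sketch that prints the four conditions (node name from this report; loading kubeconfig from the default location is an assumption):

	// Print node conditions, the source of the "Conditions" table above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"old-k8s-version-357479", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}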
	
	
	==> dmesg <==
	[Nov21 14:33] kauditd_printk_skb: 8 callbacks suppressed
	[ +39.333625] overlayfs: idmapped layers are currently not supported
	[Nov21 14:34] overlayfs: idmapped layers are currently not supported
	[Nov21 14:35] overlayfs: idmapped layers are currently not supported
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [96c28d59af9dc044a74c9ec3836f37a7c38007a450b148fbbd3efe7dfe087216] <==
	{"level":"info","ts":"2025-11-21T14:57:30.445856Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:57:30.445885Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:57:30.446058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-21T14:57:30.447216Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-21T14:57:30.450068Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:57:30.450397Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:57:30.46191Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-21T14:57:30.462448Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-21T14:57:30.462607Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-21T14:57:30.47765Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-21T14:57:30.478406Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-21T14:57:32.184415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-21T14:57:32.184538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-21T14:57:32.184587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-21T14:57:32.184635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-21T14:57:32.184666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-21T14:57:32.1847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-21T14:57:32.184732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-21T14:57:32.186423Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-357479 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-21T14:57:32.186514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:57:32.1887Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-21T14:57:32.188775Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-21T14:57:32.188815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:57:32.189776Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-21T14:57:32.194298Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:58:32 up  2:41,  0 user,  load average: 2.26, 2.64, 2.40
	Linux old-k8s-version-357479 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea49e692a21cb120384182095ffa391f2fb8bcf220d001779c9acdf6bc494b84] <==
	I1121 14:57:36.022893       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:57:36.023993       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:57:36.024143       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:57:36.024156       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:57:36.024171       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:57:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:57:36.211554       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:57:36.211571       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:57:36.211579       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:57:36.211952       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:58:06.211898       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:58:06.211898       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 14:58:06.211994       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:58:06.213262       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1121 14:58:07.612506       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:58:07.612535       1 metrics.go:72] Registering metrics
	I1121 14:58:07.612614       1 controller.go:711] "Syncing nftables rules"
	I1121 14:58:16.214056       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:58:16.214160       1 main.go:301] handling current node
	I1121 14:58:26.224636       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:58:26.224665       1 main.go:301] handling current node
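
The failures above ("dial tcp 10.96.0.1:443: i/o timeout") show kindnet losing the in-cluster service VIP while the control plane restarts; thirty seconds later its informer caches sync and normal node handling resumes. The VIP can be probed with a bare TCP dial, sketched below with the ClusterIP taken from the log; it only answers on a node where kube-proxy's rules are in place:

	// Probe the in-cluster apiserver VIP that kindnet failed to reach above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for attempt := 0; attempt < 10; attempt++ {
			conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
			if err != nil {
				fmt.Println("apiserver VIP unreachable:", err)
				time.Sleep(3 * time.Second)
				continue
			}
			conn.Close()
			fmt.Println("apiserver VIP reachable")
			return
		}
	}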
	
	
	==> kube-apiserver [b0b41441d2ebefe39ee2acf353a3ca206cd126618c88325038515b9d85d7f838] <==
	I1121 14:57:34.469325       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1121 14:57:34.469546       1 aggregator.go:166] initial CRD sync complete...
	I1121 14:57:34.469561       1 autoregister_controller.go:141] Starting autoregister controller
	I1121 14:57:34.469567       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:57:34.475952       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:57:34.501510       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1121 14:57:34.548431       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 14:57:34.563031       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1121 14:57:34.563903       1 shared_informer.go:318] Caches are synced for configmaps
	I1121 14:57:34.563985       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1121 14:57:34.564015       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1121 14:57:34.565293       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1121 14:57:34.577084       1 cache.go:39] Caches are synced for autoregister controller
	E1121 14:57:34.597096       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 14:57:35.153692       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:57:36.240781       1 controller.go:624] quota admission added evaluator for: namespaces
	I1121 14:57:36.284308       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1121 14:57:36.311909       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:57:36.325036       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:57:36.335437       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1121 14:57:36.408707       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.198.121"}
	I1121 14:57:36.455952       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.15.30"}
	I1121 14:57:47.131696       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:57:47.139837       1 controller.go:624] quota admission added evaluator for: endpoints
	I1121 14:57:47.169897       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [53c68e0361bdc5178c617b0c2656901eb2b40db3c48b84123beeedd42f17b52b] <==
	I1121 14:57:47.302981       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1121 14:57:47.303120       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-357479"
	I1121 14:57:47.303217       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1121 14:57:47.303284       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1121 14:57:47.303353       1 taint_manager.go:211] "Sending events to api server"
	I1121 14:57:47.304195       1 event.go:307] "Event occurred" object="old-k8s-version-357479" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-357479 event: Registered Node old-k8s-version-357479 in Controller"
	I1121 14:57:47.307691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.899014ms"
	I1121 14:57:47.307938       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.193µs"
	I1121 14:57:47.313001       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:57:47.321910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.629µs"
	I1121 14:57:47.325325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.36465ms"
	I1121 14:57:47.325501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.155µs"
	I1121 14:57:47.342769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="257.431µs"
	I1121 14:57:47.658356       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:57:47.676783       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:57:47.676817       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1121 14:57:53.466323       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.014µs"
	I1121 14:57:54.476540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.675µs"
	I1121 14:57:55.477771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.249µs"
	I1121 14:57:58.493715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.824158ms"
	I1121 14:57:58.493898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.646µs"
	I1121 14:58:11.162382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.023096ms"
	I1121 14:58:11.162514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.052µs"
	I1121 14:58:12.527199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.431µs"
	I1121 14:58:19.072665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.766µs"
	
	
	==> kube-proxy [a7380a6949acd872625d6f9c045103c7364a54a4fa0520562623923d643e8d9d] <==
	I1121 14:57:35.978630       1 server_others.go:69] "Using iptables proxy"
	I1121 14:57:36.023719       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1121 14:57:36.046494       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:57:36.048332       1 server_others.go:152] "Using iptables Proxier"
	I1121 14:57:36.048476       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1121 14:57:36.048512       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1121 14:57:36.048561       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1121 14:57:36.049191       1 server.go:846] "Version info" version="v1.28.0"
	I1121 14:57:36.049705       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:57:36.054344       1 config.go:188] "Starting service config controller"
	I1121 14:57:36.054419       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1121 14:57:36.054466       1 config.go:97] "Starting endpoint slice config controller"
	I1121 14:57:36.054493       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1121 14:57:36.054908       1 config.go:315] "Starting node config controller"
	I1121 14:57:36.054954       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1121 14:57:36.155212       1 shared_informer.go:318] Caches are synced for service config
	I1121 14:57:36.155256       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1121 14:57:36.155538       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [069138de88a3a043bcedc2015132da995f1f67098719cfcccc8ad8edadcf1c6b] <==
	W1121 14:57:34.456312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1121 14:57:34.456338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1121 14:57:34.456436       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1121 14:57:34.456447       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1121 14:57:34.456519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1121 14:57:34.456537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1121 14:57:34.456606       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1121 14:57:34.456622       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1121 14:57:34.456693       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1121 14:57:34.456708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1121 14:57:34.456776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1121 14:57:34.456791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1121 14:57:34.456856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1121 14:57:34.456871       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1121 14:57:34.456922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1121 14:57:34.456938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1121 14:57:34.457000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1121 14:57:34.457014       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1121 14:57:34.457064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1121 14:57:34.457080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1121 14:57:34.457128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1121 14:57:34.457142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1121 14:57:34.457365       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1121 14:57:34.457384       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1121 14:57:35.976487       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
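
The burst of "forbidden" errors above is the scheduler starting before its RBAC bindings are visible to the authorizer; the final line shows its caches syncing once the extension-apiserver-authentication ConfigMap arrives. Whether a given identity may perform such an action can be asked directly with a SelfSubjectAccessReview, sketched here for the services list that the reflectors were denied (kubeconfig loading as in the earlier sketch):

	// Ask the apiserver whether the current identity may list services.
	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		review := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Resource: "services",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
			Create(context.Background(), review, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", resp.Status.Allowed)
	}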
	
	
	==> kubelet <==
	Nov 21 14:57:48 old-k8s-version-357479 kubelet[782]: E1121 14:57:48.503557     782 projected.go:198] Error preparing data for projected volume kube-api-access-bvttv for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-87tjm: failed to sync configmap cache: timed out waiting for the condition
	Nov 21 14:57:48 old-k8s-version-357479 kubelet[782]: E1121 14:57:48.503695     782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc172b90-47f0-4d9f-a696-97f474da198a-kube-api-access-bvttv podName:cc172b90-47f0-4d9f-a696-97f474da198a nodeName:}" failed. No retries permitted until 2025-11-21 14:57:49.003669735 +0000 UTC m=+19.896545760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bvttv" (UniqueName: "kubernetes.io/projected/cc172b90-47f0-4d9f-a696-97f474da198a-kube-api-access-bvttv") pod "kubernetes-dashboard-8694d4445c-87tjm" (UID: "cc172b90-47f0-4d9f-a696-97f474da198a") : failed to sync configmap cache: timed out waiting for the condition
	Nov 21 14:57:48 old-k8s-version-357479 kubelet[782]: E1121 14:57:48.503060     782 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 21 14:57:48 old-k8s-version-357479 kubelet[782]: E1121 14:57:48.503737     782 projected.go:198] Error preparing data for projected volume kube-api-access-t9lj7 for pod kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p: failed to sync configmap cache: timed out waiting for the condition
	Nov 21 14:57:48 old-k8s-version-357479 kubelet[782]: E1121 14:57:48.503767     782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0dc8207b-428e-4ee8-80f6-41913c5d6bc8-kube-api-access-t9lj7 podName:0dc8207b-428e-4ee8-80f6-41913c5d6bc8 nodeName:}" failed. No retries permitted until 2025-11-21 14:57:49.003756981 +0000 UTC m=+19.896633007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t9lj7" (UniqueName: "kubernetes.io/projected/0dc8207b-428e-4ee8-80f6-41913c5d6bc8-kube-api-access-t9lj7") pod "dashboard-metrics-scraper-5f989dc9cf-trz6p" (UID: "0dc8207b-428e-4ee8-80f6-41913c5d6bc8") : failed to sync configmap cache: timed out waiting for the condition
	Nov 21 14:57:49 old-k8s-version-357479 kubelet[782]: W1121 14:57:49.100276     782 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0fe519ab58755ddba395f4036a3b33e16f7643870da0eea05d301e042d57ec19/crio-381404616efc4fa79c3770433ac58b1107bad9995c51dac9fa1b393b29cdab15 WatchSource:0}: Error finding container 381404616efc4fa79c3770433ac58b1107bad9995c51dac9fa1b393b29cdab15: Status 404 returned error can't find the container with id 381404616efc4fa79c3770433ac58b1107bad9995c51dac9fa1b393b29cdab15
	Nov 21 14:57:53 old-k8s-version-357479 kubelet[782]: I1121 14:57:53.447575     782 scope.go:117] "RemoveContainer" containerID="9cc63b663bc90b3fc035a18bd74f5ca123110167ce66f48ff555580a271a5357"
	Nov 21 14:57:54 old-k8s-version-357479 kubelet[782]: I1121 14:57:54.451631     782 scope.go:117] "RemoveContainer" containerID="9cc63b663bc90b3fc035a18bd74f5ca123110167ce66f48ff555580a271a5357"
	Nov 21 14:57:54 old-k8s-version-357479 kubelet[782]: I1121 14:57:54.451988     782 scope.go:117] "RemoveContainer" containerID="054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad"
	Nov 21 14:57:54 old-k8s-version-357479 kubelet[782]: E1121 14:57:54.452261     782 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-trz6p_kubernetes-dashboard(0dc8207b-428e-4ee8-80f6-41913c5d6bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p" podUID="0dc8207b-428e-4ee8-80f6-41913c5d6bc8"
	Nov 21 14:57:55 old-k8s-version-357479 kubelet[782]: I1121 14:57:55.456557     782 scope.go:117] "RemoveContainer" containerID="054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad"
	Nov 21 14:57:55 old-k8s-version-357479 kubelet[782]: E1121 14:57:55.456846     782 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-trz6p_kubernetes-dashboard(0dc8207b-428e-4ee8-80f6-41913c5d6bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p" podUID="0dc8207b-428e-4ee8-80f6-41913c5d6bc8"
	Nov 21 14:57:59 old-k8s-version-357479 kubelet[782]: I1121 14:57:59.053405     782 scope.go:117] "RemoveContainer" containerID="054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad"
	Nov 21 14:57:59 old-k8s-version-357479 kubelet[782]: E1121 14:57:59.053745     782 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-trz6p_kubernetes-dashboard(0dc8207b-428e-4ee8-80f6-41913c5d6bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p" podUID="0dc8207b-428e-4ee8-80f6-41913c5d6bc8"
	Nov 21 14:58:06 old-k8s-version-357479 kubelet[782]: I1121 14:58:06.488038     782 scope.go:117] "RemoveContainer" containerID="9c2c474dad29f36153907aecb633e2ce285822491863ba62f7db3147f7a895c6"
	Nov 21 14:58:06 old-k8s-version-357479 kubelet[782]: I1121 14:58:06.523434     782 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-87tjm" podStartSLOduration=10.738594149 podCreationTimestamp="2025-11-21 14:57:47 +0000 UTC" firstStartedPulling="2025-11-21 14:57:49.104950657 +0000 UTC m=+19.997826683" lastFinishedPulling="2025-11-21 14:57:57.885886037 +0000 UTC m=+28.778762063" observedRunningTime="2025-11-21 14:57:58.479656777 +0000 UTC m=+29.372532803" watchObservedRunningTime="2025-11-21 14:58:06.519529529 +0000 UTC m=+37.412405555"
	Nov 21 14:58:12 old-k8s-version-357479 kubelet[782]: I1121 14:58:12.345373     782 scope.go:117] "RemoveContainer" containerID="054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad"
	Nov 21 14:58:12 old-k8s-version-357479 kubelet[782]: I1121 14:58:12.505566     782 scope.go:117] "RemoveContainer" containerID="054989c8452f2f8d8aa68df59021dd27d96e298c6e5149302548241634e267ad"
	Nov 21 14:58:12 old-k8s-version-357479 kubelet[782]: I1121 14:58:12.505846     782 scope.go:117] "RemoveContainer" containerID="ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc"
	Nov 21 14:58:12 old-k8s-version-357479 kubelet[782]: E1121 14:58:12.506135     782 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-trz6p_kubernetes-dashboard(0dc8207b-428e-4ee8-80f6-41913c5d6bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p" podUID="0dc8207b-428e-4ee8-80f6-41913c5d6bc8"
	Nov 21 14:58:19 old-k8s-version-357479 kubelet[782]: I1121 14:58:19.053434     782 scope.go:117] "RemoveContainer" containerID="ab614e073a4e75fbacf51070b3cef7314d29a2692efef89523b27174f29b53fc"
	Nov 21 14:58:19 old-k8s-version-357479 kubelet[782]: E1121 14:58:19.054253     782 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-trz6p_kubernetes-dashboard(0dc8207b-428e-4ee8-80f6-41913c5d6bc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-trz6p" podUID="0dc8207b-428e-4ee8-80f6-41913c5d6bc8"
	Nov 21 14:58:26 old-k8s-version-357479 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:58:26 old-k8s-version-357479 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:58:26 old-k8s-version-357479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4265cf599c2b9bd90aebc621ed98272108d9ad03647acec31d485ea27a3b7d54] <==
	2025/11/21 14:57:57 Using namespace: kubernetes-dashboard
	2025/11/21 14:57:57 Using in-cluster config to connect to apiserver
	2025/11/21 14:57:57 Using secret token for csrf signing
	2025/11/21 14:57:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:57:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:57:57 Successful initial request to the apiserver, version: v1.28.0
	2025/11/21 14:57:57 Generating JWE encryption key
	2025/11/21 14:57:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:57:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:57:58 Initializing JWE encryption key from synchronized object
	2025/11/21 14:57:58 Creating in-cluster Sidecar client
	2025/11/21 14:57:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:57:58 Serving insecurely on HTTP port: 9090
	2025/11/21 14:58:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:57:57 Starting overwatch
	
	
	==> storage-provisioner [30b10a34043583ba1f78c5ab76dd01d93f62c25e440b077d1355ec25ade82c83] <==
	I1121 14:58:06.537528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:58:06.558733       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:58:06.558805       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1121 14:58:23.970266       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:58:23.973376       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-357479_4743d8ba-f91e-4abc-86ce-9934fb3f9dd7!
	I1121 14:58:23.974002       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c725c8a8-b445-4999-8998-79842ae68d9e", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-357479_4743d8ba-f91e-4abc-86ce-9934fb3f9dd7 became leader
	I1121 14:58:24.074989       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-357479_4743d8ba-f91e-4abc-86ce-9934fb3f9dd7!
	
	
	==> storage-provisioner [9c2c474dad29f36153907aecb633e2ce285822491863ba62f7db3147f7a895c6] <==
	I1121 14:57:35.888723       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:58:05.891023       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
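The kubelet entries above show the usual CrashLoopBackOff progression: the restart back-off for dashboard-metrics-scraper doubles from 10s to 20s after each failed start, and a stock kubelet keeps doubling it up to a 5m cap. A quick way to confirm the restart count and the current back-off for the pod named in the log is to describe it (a triage sketch, not part of the harness):

	kubectl --context old-k8s-version-357479 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-trz6p

The Events section reports "Back-off restarting failed container" together with the delay currently in effect.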
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-357479 -n old-k8s-version-357479
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-357479 -n old-k8s-version-357479: exit status 2 (515.226112ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-357479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (8.73s)
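For reference, the {{.APIServer}} argument used above is a Go template over minikube's status struct; other fields such as Host, Kubelet, and Kubeconfig can be combined in one call (an illustrative invocation, assuming the profile still exists):

	out/minikube-linux-arm64 status -p old-k8s-version-357479 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'

minikube encodes component state in the exit code of status rather than treating it as a hard failure, which is why the harness labels exit status 2 as "may be ok".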

                                                
                                    

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (759.737402ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:00:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
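The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused check, which shells into the node and runs `sudo runc list -f json`; the stderr shows the probe failed because /run/runc did not exist at that moment. The same probe can be rerun by hand for triage (an illustrative sketch, assuming the profile is still running):

	out/minikube-linux-arm64 ssh -p no-preload-844780 -- sudo runc list -f json

Comparing that against `sudo crictl ps` on the same node shows whether containers are in fact running even while runc's default state directory is absent.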
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-844780 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-844780 describe deploy/metrics-server -n kube-system: exit status 1 (106.185684ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-844780 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
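When the deployment does exist, the substituted image can be read directly instead of parsing describe output; for example (illustrative, since the deployment was NotFound in this run):

	kubectl --context no-preload-844780 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

With the flags used above, the test expects that value to contain fake.domain/registry.k8s.io/echoserver:1.4, i.e. the custom registry prefixed onto the custom image.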
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-844780
helpers_test.go:243: (dbg) docker inspect no-preload-844780:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460",
	        "Created": "2025-11-21T14:58:39.813840429Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 476809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:58:40.260621472Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/hosts",
	        "LogPath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460-json.log",
	        "Name": "/no-preload-844780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-844780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-844780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460",
	                "LowerDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-844780",
	                "Source": "/var/lib/docker/volumes/no-preload-844780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-844780",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-844780",
	                "name.minikube.sigs.k8s.io": "no-preload-844780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "49216c4ae3a2df37479ba54bb450d1d2b616bfb9f4bcde7be6199fd713ca1be7",
	            "SandboxKey": "/var/run/docker/netns/49216c4ae3a2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-844780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:c7:0b:a8:de:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "beccd80047d00ade7f2a91d5b368d7f2498703ce72d6db7bd114ead62561b75b",
	                    "EndpointID": "4970f6d52ade00bbf6e9f45a81a74eb6943dfe348ec562d804310f9ca63ff237",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-844780",
	                        "8e592d0d77ca"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
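In this inspect output, HostConfig.PortBindings requests dynamic ports (empty HostPort) while NetworkSettings.Ports records what Docker actually assigned, e.g. 8443/tcp on 127.0.0.1:33431. A single field can be pulled with an inspect format template rather than scanning the full JSON (illustrative):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-844780

For the container above this prints 33431, the host port minikube uses to reach the apiserver.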
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-844780 -n no-preload-844780
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-844780 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-844780 logs -n 25: (1.403061434s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-609503 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-609503                │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-609503                │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ ssh     │ -p cilium-609503 sudo crio config                                                                                                                                                                                                             │ cilium-609503                │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ delete  │ -p cilium-609503                                                                                                                                                                                                                              │ cilium-609503                │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │ 21 Nov 25 14:54 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │ 21 Nov 25 14:55 UTC │
	│ delete  │ -p force-systemd-env-360486                                                                                                                                                                                                                   │ force-systemd-env-360486     │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p cert-options-605096 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-605096          │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ ssh     │ cert-options-605096 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-605096          │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ ssh     │ -p cert-options-605096 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-605096          │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ delete  │ -p cert-options-605096                                                                                                                                                                                                                        │ cert-options-605096          │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-357479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │                     │
	│ stop    │ -p old-k8s-version-357479 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-357479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ image   │ old-k8s-version-357479 image list --format=json                                                                                                                                                                                               │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ pause   │ -p old-k8s-version-357479 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p cert-expiration-304879                                                                                                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	│ delete  │ -p disable-driver-mounts-984933                                                                                                                                                                                                               │ disable-driver-mounts-984933 │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:58:38
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:58:38.107056  476289 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:58:38.107169  476289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:58:38.107180  476289 out.go:374] Setting ErrFile to fd 2...
	I1121 14:58:38.107185  476289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:58:38.107447  476289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:58:38.107915  476289 out.go:368] Setting JSON to false
	I1121 14:58:38.109074  476289 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9670,"bootTime":1763727448,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:58:38.109155  476289 start.go:143] virtualization:  
	I1121 14:58:38.116549  476289 out.go:179] * [no-preload-844780] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:58:38.125004  476289 notify.go:221] Checking for updates...
	I1121 14:58:38.128671  476289 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:58:38.131653  476289 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:58:38.134583  476289 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:58:38.137911  476289 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:58:38.145302  476289 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:58:38.148227  476289 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:58:38.151625  476289 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:38.151747  476289 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:58:38.189139  476289 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:58:38.189265  476289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:58:38.280228  476289 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-21 14:58:38.267278621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:58:38.280340  476289 docker.go:319] overlay module found
	I1121 14:58:38.285459  476289 out.go:179] * Using the docker driver based on user configuration
	I1121 14:58:38.296692  476289 start.go:309] selected driver: docker
	I1121 14:58:38.296719  476289 start.go:930] validating driver "docker" against <nil>
	I1121 14:58:38.296734  476289 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:58:38.297445  476289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:58:38.386653  476289 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2025-11-21 14:58:38.374286591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:58:38.386799  476289 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:58:38.387034  476289 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:58:38.392862  476289 out.go:179] * Using Docker driver with root privileges
	I1121 14:58:38.396026  476289 cni.go:84] Creating CNI manager for ""
	I1121 14:58:38.396096  476289 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:58:38.396109  476289 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:58:38.396215  476289 start.go:353] cluster config:
	{Name:no-preload-844780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-844780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:58:38.399524  476289 out.go:179] * Starting "no-preload-844780" primary control-plane node in "no-preload-844780" cluster
	I1121 14:58:38.402613  476289 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:58:38.405717  476289 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:58:38.408755  476289 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:38.408881  476289 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/config.json ...
	I1121 14:58:38.408917  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/config.json: {Name:mkfb7cdc2277aa8a4d6474a9a41ca8434a01413b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:38.409077  476289 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:58:38.409260  476289 cache.go:107] acquiring lock: {Name:mk6b29d3694958920b384334ab1c1ec7d74d89cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.409321  476289 cache.go:115] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1121 14:58:38.409333  476289 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 79.419µs
	I1121 14:58:38.409342  476289 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1121 14:58:38.409357  476289 cache.go:107] acquiring lock: {Name:mkc6fd8b8c696cdeb14f732597ce94848629bc57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.409646  476289 cache.go:107] acquiring lock: {Name:mkfc4087bd4b5607802af9dd21b35ce6c4cbcaf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.409822  476289 cache.go:107] acquiring lock: {Name:mk19595a45a9cb004f48547824250752e64a8cd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.410102  476289 cache.go:107] acquiring lock: {Name:mka784778591c360ff95b3dffbd7ca6884371f11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.410399  476289 cache.go:107] acquiring lock: {Name:mk8bf5105492f807bfceabb40550e1d4001f342e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.410611  476289 cache.go:107] acquiring lock: {Name:mk31d53eefe0afe1c3fb10ca1e47af7b59cf7415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.410880  476289 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:38.411077  476289 cache.go:107] acquiring lock: {Name:mkf829b7c112aa5e2eabfd20e6118dc646dc5e50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.411356  476289 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:38.411654  476289 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:38.411889  476289 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:38.412040  476289 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1121 14:58:38.412213  476289 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:38.412269  476289 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:38.423006  476289 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:38.423459  476289 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:38.425424  476289 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:38.425814  476289 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1121 14:58:38.426226  476289 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:38.426522  476289 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:38.426664  476289 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:38.465281  476289 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:58:38.465341  476289 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:58:38.465386  476289 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:58:38.465457  476289 start.go:360] acquireMachinesLock for no-preload-844780: {Name:mke3cf8aa4a5f035751556a1a6fbea0be7cfa7e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.465767  476289 start.go:364] duration metric: took 168.97µs to acquireMachinesLock for "no-preload-844780"
	I1121 14:58:38.465843  476289 start.go:93] Provisioning new machine with config: &{Name:no-preload-844780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-844780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:58:38.466021  476289 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:58:38.007896  476123 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:58:38.008176  476123 start.go:159] libmachine.API.Create for "embed-certs-902161" (driver="docker")
	I1121 14:58:38.008206  476123 client.go:173] LocalClient.Create starting
	I1121 14:58:38.008298  476123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem
	I1121 14:58:38.008337  476123 main.go:143] libmachine: Decoding PEM data...
	I1121 14:58:38.008351  476123 main.go:143] libmachine: Parsing certificate...
	I1121 14:58:38.008594  476123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem
	I1121 14:58:38.008626  476123 main.go:143] libmachine: Decoding PEM data...
	I1121 14:58:38.008637  476123 main.go:143] libmachine: Parsing certificate...
	I1121 14:58:38.009095  476123 cli_runner.go:164] Run: docker network inspect embed-certs-902161 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:58:38.038623  476123 cli_runner.go:211] docker network inspect embed-certs-902161 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:58:38.038706  476123 network_create.go:284] running [docker network inspect embed-certs-902161] to gather additional debugging logs...
	I1121 14:58:38.038728  476123 cli_runner.go:164] Run: docker network inspect embed-certs-902161
	W1121 14:58:38.062111  476123 cli_runner.go:211] docker network inspect embed-certs-902161 returned with exit code 1
	I1121 14:58:38.062159  476123 network_create.go:287] error running [docker network inspect embed-certs-902161]: docker network inspect embed-certs-902161: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-902161 not found
	I1121 14:58:38.062180  476123 network_create.go:289] output of [docker network inspect embed-certs-902161]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-902161 not found
	
	** /stderr **
	I1121 14:58:38.062281  476123 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:58:38.094490  476123 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-82d3b8bc8a36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:46:f3:82:e8:95} reservation:<nil>}
	I1121 14:58:38.094794  476123 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-741c868a6917 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:04:b7:a7:98:dc} reservation:<nil>}
	I1121 14:58:38.094973  476123 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-047a1ecabae6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:eb:03:dd:6a:cd} reservation:<nil>}
	I1121 14:58:38.095327  476123 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197cc00}
	I1121 14:58:38.095344  476123 network_create.go:124] attempt to create docker network embed-certs-902161 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:58:38.095407  476123 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-902161 embed-certs-902161
	I1121 14:58:38.167480  476123 network_create.go:108] docker network embed-certs-902161 192.168.76.0/24 created
	I1121 14:58:38.167515  476123 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-902161" container
	I1121 14:58:38.167590  476123 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:58:38.202298  476123 cli_runner.go:164] Run: docker volume create embed-certs-902161 --label name.minikube.sigs.k8s.io=embed-certs-902161 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:58:38.233876  476123 oci.go:103] Successfully created a docker volume embed-certs-902161
	I1121 14:58:38.233974  476123 cli_runner.go:164] Run: docker run --rm --name embed-certs-902161-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-902161 --entrypoint /usr/bin/test -v embed-certs-902161:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:58:38.969318  476123 oci.go:107] Successfully prepared a docker volume embed-certs-902161
	I1121 14:58:38.969395  476123 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:38.969409  476123 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:58:38.969486  476123 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-902161:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
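For reference, the extraction step just logged can be reproduced with a short os/exec wrapper; a sketch assuming the tarball path, volume name, and kicbase digest are the literals from the log line above (the function and parameter names here are illustrative):

    package preload

    import (
        "log"
        "os/exec"
    )

    // extractPreload mirrors the logged `docker run` invocation: the kicbase
    // image's own tar does the unpacking (so the host needs neither tar nor
    // lz4), the tarball is mounted read-only, and the named volume receives
    // the unpacked contents at /extractDir.
    func extractPreload(tarballPath, volumeName, kicbaseImage string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarballPath+":/preloaded.tar:ro",
            "-v", volumeName+":/extractDir",
            kicbaseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Printf("extract failed: %v\n%s", err, out)
        }
        return err
    }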
	I1121 14:58:38.472452  476289 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:58:38.472760  476289 start.go:159] libmachine.API.Create for "no-preload-844780" (driver="docker")
	I1121 14:58:38.472822  476289 client.go:173] LocalClient.Create starting
	I1121 14:58:38.472914  476289 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem
	I1121 14:58:38.472984  476289 main.go:143] libmachine: Decoding PEM data...
	I1121 14:58:38.473018  476289 main.go:143] libmachine: Parsing certificate...
	I1121 14:58:38.473101  476289 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem
	I1121 14:58:38.473151  476289 main.go:143] libmachine: Decoding PEM data...
	I1121 14:58:38.473178  476289 main.go:143] libmachine: Parsing certificate...
	I1121 14:58:38.473566  476289 cli_runner.go:164] Run: docker network inspect no-preload-844780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:58:38.496587  476289 cli_runner.go:211] docker network inspect no-preload-844780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:58:38.496698  476289 network_create.go:284] running [docker network inspect no-preload-844780] to gather additional debugging logs...
	I1121 14:58:38.496718  476289 cli_runner.go:164] Run: docker network inspect no-preload-844780
	W1121 14:58:38.522456  476289 cli_runner.go:211] docker network inspect no-preload-844780 returned with exit code 1
	I1121 14:58:38.522533  476289 network_create.go:287] error running [docker network inspect no-preload-844780]: docker network inspect no-preload-844780: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-844780 not found
	I1121 14:58:38.522571  476289 network_create.go:289] output of [docker network inspect no-preload-844780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-844780 not found
	
	** /stderr **
	I1121 14:58:38.522736  476289 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:58:38.541240  476289 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-82d3b8bc8a36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:46:f3:82:e8:95} reservation:<nil>}
	I1121 14:58:38.541667  476289 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-741c868a6917 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:04:b7:a7:98:dc} reservation:<nil>}
	I1121 14:58:38.542019  476289 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-047a1ecabae6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:eb:03:dd:6a:cd} reservation:<nil>}
	I1121 14:58:38.542313  476289 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-353a1d7977a8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:d6:61:83:05:3c} reservation:<nil>}
	I1121 14:58:38.542687  476289 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c78160}
	I1121 14:58:38.542704  476289 network_create.go:124] attempt to create docker network no-preload-844780 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 14:58:38.542758  476289 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-844780 no-preload-844780
	I1121 14:58:38.693827  476289 network_create.go:108] docker network no-preload-844780 192.168.85.0/24 created
	I1121 14:58:38.693859  476289 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-844780" container
	I1121 14:58:38.693933  476289 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:58:38.720768  476289 cli_runner.go:164] Run: docker volume create no-preload-844780 --label name.minikube.sigs.k8s.io=no-preload-844780 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:58:38.740206  476289 oci.go:103] Successfully created a docker volume no-preload-844780
	I1121 14:58:38.740293  476289 cli_runner.go:164] Run: docker run --rm --name no-preload-844780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-844780 --entrypoint /usr/bin/test -v no-preload-844780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:58:38.816053  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1121 14:58:38.842130  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:58:38.853391  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:58:38.877818  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1121 14:58:38.877846  476289 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 467.451659ms
	I1121 14:58:38.877863  476289 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1121 14:58:38.883526  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:58:38.907475  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:58:38.922003  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:58:38.965773  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:58:39.396813  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1121 14:58:39.396841  476289 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 986.743052ms
	I1121 14:58:39.396854  476289 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1121 14:58:39.532181  476289 oci.go:107] Successfully prepared a docker volume no-preload-844780
	I1121 14:58:39.532284  476289 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1121 14:58:39.532482  476289 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:58:39.532631  476289 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:58:39.695275  476289 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-844780 --name no-preload-844780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-844780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-844780 --network no-preload-844780 --ip 192.168.85.2 --volume no-preload-844780:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:58:40.021539  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1121 14:58:40.021688  476289 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.611870192s
	I1121 14:58:40.021707  476289 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1121 14:58:40.141233  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1121 14:58:40.141269  476289 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.730193481s
	I1121 14:58:40.141286  476289 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1121 14:58:40.151813  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1121 14:58:40.151851  476289 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.742211011s
	I1121 14:58:40.151864  476289 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1121 14:58:40.231599  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1121 14:58:40.231821  476289 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.82245761s
	I1121 14:58:40.231846  476289 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1121 14:58:40.637324  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Running}}
	I1121 14:58:40.683250  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:58:40.683662  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1121 14:58:40.683685  476289 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.273083966s
	I1121 14:58:40.683697  476289 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1121 14:58:40.683711  476289 cache.go:87] Successfully saved all images to host disk.
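The interleaved cache lines above follow a consistent shape: each image is handled concurrently, an existing tar file short-circuits the work (the "exists" then "took ..." pairs), and a summary line closes the pass. A hypothetical sketch of that shape — saveToTar is a stand-in, not a real minikube helper:

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "strings"
        "sync"
        "time"
    )

    // saveToTar is a stub for the real pull-and-export step.
    func saveToTar(img, dst string) error { return nil }

    func cacheImages(cacheDir string, images []string) {
        var wg sync.WaitGroup
        for _, img := range images {
            wg.Add(1)
            go func(img string) {
                defer wg.Done()
                start := time.Now()
                // registry.k8s.io/pause:3.10.1 -> <cacheDir>/registry.k8s.io/pause_3.10.1
                dst := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
                if _, err := os.Stat(dst); err == nil {
                    log.Printf("cache image %q -> %q took %s (already cached)", img, dst, time.Since(start))
                    return
                }
                if err := saveToTar(img, dst); err != nil {
                    log.Printf("save %s: %v", img, err)
                    return
                }
                log.Printf("save to tar file %s -> %s succeeded", img, dst)
            }(img)
        }
        wg.Wait()
        log.Println("Successfully saved all images to host disk.")
    }

    func main() {
        cacheImages(os.TempDir(), []string{"registry.k8s.io/pause:3.10.1"})
    }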
	I1121 14:58:40.712504  476289 cli_runner.go:164] Run: docker exec no-preload-844780 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:58:40.791466  476289 oci.go:144] the created container "no-preload-844780" has a running status.
	I1121 14:58:40.791498  476289 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa...
	I1121 14:58:41.119524  476289 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:58:41.146744  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:58:41.173290  476289 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:58:41.173314  476289 kic_runner.go:114] Args: [docker exec --privileged no-preload-844780 chown docker:docker /home/docker/.ssh/authorized_keys]
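The key bootstrap above has three parts: generate an RSA key pair under .minikube/machines/<name>/, install the public half as the docker user's authorized_keys, and fix ownership via a privileged exec. A compressed sketch assuming golang.org/x/crypto/ssh for the authorized_keys encoding (writing the private key to disk is elided):

    package main

    import (
        "bytes"
        "crypto/rand"
        "crypto/rsa"
        "log"
        "os/exec"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        name := "no-preload-844780"

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        authorized := ssh.MarshalAuthorizedKey(pub) // the ~381-byte payload in the log

        // Equivalent of the kic_runner copy: stream the key into the container.
        cp := exec.Command("docker", "exec", "-i", name, "sh", "-c",
            "mkdir -p /home/docker/.ssh && cat > /home/docker/.ssh/authorized_keys")
        cp.Stdin = bytes.NewReader(authorized)
        if err := cp.Run(); err != nil {
            log.Fatal(err)
        }

        // And the logged chown, run with --privileged as in the Args line above.
        if err := exec.Command("docker", "exec", "--privileged", name,
            "chown", "docker:docker", "/home/docker/.ssh/authorized_keys").Run(); err != nil {
            log.Fatal(err)
        }
    }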
	I1121 14:58:41.282434  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:58:41.315576  476289 machine.go:94] provisionDockerMachine start ...
	I1121 14:58:41.315675  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:41.342221  476289 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:41.342559  476289 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1121 14:58:41.342599  476289 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:58:41.343368  476289 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48884->127.0.0.1:33428: read: connection reset by peer
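That handshake failure is expected noise: the container only just reported a running status, so sshd behind the forwarded port (127.0.0.1:33428) is not accepting connections yet, and provisioning simply retries — the same command succeeds at 14:58:44 further down. A sketch of such a retry loop with x/crypto/ssh (the 30×1s budget is an assumption, not minikube's actual policy):

    package sshutil

    import (
        "fmt"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until the SSH handshake succeeds or the
    // attempt budget runs out; early "connection reset by peer" errors just
    // mean the daemon is not up yet.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig) (*ssh.Client, error) {
        var lastErr error
        for i := 0; i < 30; i++ {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            lastErr = err
            time.Sleep(time.Second)
        }
        return nil, fmt.Errorf("ssh dial %s: %w", addr, lastErr)
    }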
	I1121 14:58:43.544090  476123 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-902161:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.574567915s)
	I1121 14:58:43.544123  476123 kic.go:203] duration metric: took 4.574710439s to extract preloaded images to volume ...
	W1121 14:58:43.544255  476123 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:58:43.544365  476123 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:58:43.636688  476123 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-902161 --name embed-certs-902161 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-902161 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-902161 --network embed-certs-902161 --ip 192.168.76.2 --volume embed-certs-902161:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:58:43.945616  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Running}}
	I1121 14:58:43.966414  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:58:43.989259  476123 cli_runner.go:164] Run: docker exec embed-certs-902161 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:58:44.046109  476123 oci.go:144] the created container "embed-certs-902161" has a running status.
	I1121 14:58:44.046140  476123 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa...
	I1121 14:58:45.678027  476123 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:58:45.702131  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:58:45.726060  476123 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:58:45.726086  476123 kic_runner.go:114] Args: [docker exec --privileged embed-certs-902161 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:58:45.766813  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:58:45.788348  476123 machine.go:94] provisionDockerMachine start ...
	I1121 14:58:45.788537  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:45.810143  476123 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:45.810474  476123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1121 14:58:45.810485  476123 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:58:45.975940  476123 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-902161
	
	I1121 14:58:45.975962  476123 ubuntu.go:182] provisioning hostname "embed-certs-902161"
	I1121 14:58:45.976025  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:45.993906  476123 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:45.994211  476123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1121 14:58:45.994227  476123 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-902161 && echo "embed-certs-902161" | sudo tee /etc/hostname
	I1121 14:58:46.159333  476123 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-902161
	
	I1121 14:58:46.159414  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:46.178055  476123 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:46.178359  476123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1121 14:58:46.178383  476123 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-902161' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-902161/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-902161' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:58:46.324700  476123 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:58:46.324723  476123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 14:58:46.324746  476123 ubuntu.go:190] setting up certificates
	I1121 14:58:46.324757  476123 provision.go:84] configureAuth start
	I1121 14:58:46.324827  476123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 14:58:46.344155  476123 provision.go:143] copyHostCerts
	I1121 14:58:46.344228  476123 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 14:58:46.344246  476123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 14:58:46.344310  476123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 14:58:46.344504  476123 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 14:58:46.344514  476123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 14:58:46.344538  476123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 14:58:46.344600  476123 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 14:58:46.344605  476123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 14:58:46.344625  476123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 14:58:46.344673  476123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.embed-certs-902161 san=[127.0.0.1 192.168.76.2 embed-certs-902161 localhost minikube]
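The san=[...] list above is the interesting part of the server cert: it must cover the loopback address used by the port-forwarded API endpoint, the container's static IP, and the hostname. A sketch of that generation with crypto/x509, assuming the CA pair has already been parsed from ca.pem/ca-key.pem and that the key is RSA 2048 (the field values below are taken from this run's log and config; the function itself is illustrative):

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-902161"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SANs from the log: 127.0.0.1, 192.168.76.2, hostname, localhost, minikube.
            DNSNames:    []string{"embed-certs-902161", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        return der, serverKey, err
    }

The DER output is what ends up PEM-encoded as machines/server.pem and scp'd to /etc/docker/server.pem in the copyRemoteCerts step that follows.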
	I1121 14:58:46.713459  476123 provision.go:177] copyRemoteCerts
	I1121 14:58:46.713580  476123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:58:46.713689  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:46.731016  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:58:46.836701  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:58:46.861849  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:58:46.887024  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:58:46.910909  476123 provision.go:87] duration metric: took 586.128191ms to configureAuth
	I1121 14:58:46.910938  476123 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:58:46.911110  476123 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:46.911223  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:46.931978  476123 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:46.932454  476123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1121 14:58:46.932507  476123 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:58:47.284317  476123 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:58:47.284338  476123 machine.go:97] duration metric: took 1.495972045s to provisionDockerMachine
	I1121 14:58:47.284364  476123 client.go:176] duration metric: took 9.276136491s to LocalClient.Create
	I1121 14:58:47.284378  476123 start.go:167] duration metric: took 9.276204742s to libmachine.API.Create "embed-certs-902161"
	I1121 14:58:47.284417  476123 start.go:293] postStartSetup for "embed-certs-902161" (driver="docker")
	I1121 14:58:47.284427  476123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:58:47.284485  476123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:58:47.284522  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:47.307837  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:58:47.430114  476123 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:58:47.436681  476123 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:58:47.436759  476123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:58:47.436783  476123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 14:58:47.436875  476123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 14:58:47.436996  476123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 14:58:47.437139  476123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:58:47.449692  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:58:47.480133  476123 start.go:296] duration metric: took 195.701543ms for postStartSetup
	I1121 14:58:47.480603  476123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 14:58:47.497174  476123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/config.json ...
	I1121 14:58:47.497466  476123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:58:47.497512  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:47.518276  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:58:47.621549  476123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:58:47.626665  476123 start.go:128] duration metric: took 9.622427094s to createHost
	I1121 14:58:47.626686  476123 start.go:83] releasing machines lock for "embed-certs-902161", held for 9.622551707s
	I1121 14:58:47.626753  476123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 14:58:47.646364  476123 ssh_runner.go:195] Run: cat /version.json
	I1121 14:58:47.646415  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:47.646631  476123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:58:47.646700  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:47.673804  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:58:47.684495  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:58:44.580029  476289 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-844780
	
	I1121 14:58:44.580081  476289 ubuntu.go:182] provisioning hostname "no-preload-844780"
	I1121 14:58:44.580176  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:44.623720  476289 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:44.624024  476289 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1121 14:58:44.624035  476289 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-844780 && echo "no-preload-844780" | sudo tee /etc/hostname
	I1121 14:58:44.873351  476289 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-844780
	
	I1121 14:58:44.873708  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:44.940638  476289 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:44.940940  476289 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1121 14:58:44.940956  476289 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-844780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-844780/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-844780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:58:45.127624  476289 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:58:45.127675  476289 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 14:58:45.127720  476289 ubuntu.go:190] setting up certificates
	I1121 14:58:45.127734  476289 provision.go:84] configureAuth start
	I1121 14:58:45.127809  476289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-844780
	I1121 14:58:45.161683  476289 provision.go:143] copyHostCerts
	I1121 14:58:45.161759  476289 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 14:58:45.161769  476289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 14:58:45.161870  476289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 14:58:45.161983  476289 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 14:58:45.161990  476289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 14:58:45.162018  476289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 14:58:45.162076  476289 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 14:58:45.162082  476289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 14:58:45.162106  476289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 14:58:45.162187  476289 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.no-preload-844780 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-844780]
	I1121 14:58:45.642820  476289 provision.go:177] copyRemoteCerts
	I1121 14:58:45.642902  476289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:58:45.642955  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:45.667329  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:58:45.775943  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:58:45.795913  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:58:45.818416  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:58:45.839708  476289 provision.go:87] duration metric: took 711.946003ms to configureAuth
	I1121 14:58:45.839736  476289 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:58:45.839939  476289 config.go:182] Loaded profile config "no-preload-844780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:45.840056  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:45.863903  476289 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:45.864217  476289 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1121 14:58:45.864233  476289 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:58:46.258089  476289 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:58:46.258128  476289 machine.go:97] duration metric: took 4.942523595s to provisionDockerMachine
	I1121 14:58:46.258140  476289 client.go:176] duration metric: took 7.785298917s to LocalClient.Create
	I1121 14:58:46.258163  476289 start.go:167] duration metric: took 7.785404453s to libmachine.API.Create "no-preload-844780"
	I1121 14:58:46.258171  476289 start.go:293] postStartSetup for "no-preload-844780" (driver="docker")
	I1121 14:58:46.258187  476289 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:58:46.258278  476289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:58:46.258318  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:46.278676  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:58:46.383109  476289 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:58:46.387454  476289 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:58:46.387479  476289 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:58:46.387490  476289 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 14:58:46.387557  476289 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 14:58:46.387639  476289 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 14:58:46.387745  476289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:58:46.400866  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:58:46.426748  476289 start.go:296] duration metric: took 168.553116ms for postStartSetup
	I1121 14:58:46.427127  476289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-844780
	I1121 14:58:46.448113  476289 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/config.json ...
	I1121 14:58:46.448574  476289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:58:46.448621  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:46.472967  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:58:46.571133  476289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:58:46.577531  476289 start.go:128] duration metric: took 8.111446806s to createHost
	I1121 14:58:46.577558  476289 start.go:83] releasing machines lock for "no-preload-844780", held for 8.111748668s
	I1121 14:58:46.577707  476289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-844780
	I1121 14:58:46.600378  476289 ssh_runner.go:195] Run: cat /version.json
	I1121 14:58:46.600430  476289 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:58:46.600477  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:46.600498  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:46.627408  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:58:46.644618  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:58:46.846602  476289 ssh_runner.go:195] Run: systemctl --version
	I1121 14:58:46.853762  476289 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:58:46.899244  476289 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:58:46.904492  476289 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:58:46.904566  476289 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:58:46.942413  476289 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 14:58:46.942433  476289 start.go:496] detecting cgroup driver to use...
	I1121 14:58:46.942468  476289 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:58:46.942531  476289 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:58:46.963041  476289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:58:46.980548  476289 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:58:46.980609  476289 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:58:47.005075  476289 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:58:47.034603  476289 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:58:47.184868  476289 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:58:47.339560  476289 docker.go:234] disabling docker service ...
	I1121 14:58:47.339627  476289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:58:47.362647  476289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:58:47.379729  476289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:58:47.549236  476289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:58:47.692937  476289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:58:47.721182  476289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:58:47.741879  476289 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:58:47.741948  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.751089  476289 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 14:58:47.751159  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.760669  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.769543  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.779636  476289 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:58:47.790283  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.800561  476289 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.820545  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.835349  476289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:58:47.844280  476289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:58:47.853260  476289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:48.013867  476289 ssh_runner.go:195] Run: sudo systemctl restart crio
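The run of sed commands above is the whole of minikube's CRI-O tailoring: pause image, cgroupfs as cgroup manager, conmon_cgroup=pod, and an unprivileged-port sysctl, all rewritten in /etc/crio/crio.conf.d/02-crio.conf before the daemon restart. The two central substitutions, expressed in Go for clarity (a sketch; the real code path runs sed over the ssh_runner, as logged):

    package crioconf

    import (
        "os"
        "regexp"
    )

    const confPath = "/etc/crio/crio.conf.d/02-crio.conf"

    // rewriteConf applies the same whole-line replacements as the logged
    // `sed -i 's|^.*pause_image = .*$|...|'` invocations.
    func rewriteConf() error {
        conf, err := os.ReadFile(confPath)
        if err != nil {
            return err
        }
        pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pauseRe.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        conf = cgroupRe.ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
        return os.WriteFile(confPath, conf, 0o644)
    }

A daemon-reload and `systemctl restart crio` then pick the file up, exactly as the next two log lines show.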
	I1121 14:58:48.225230  476289 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:58:48.225298  476289 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:58:48.233079  476289 start.go:564] Will wait 60s for crictl version
	I1121 14:58:48.233193  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:48.237929  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:58:48.268536  476289 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
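Both "Will wait 60s" lines describe the same pattern: poll until the CRI socket appears, then until crictl answers. A local-flavored sketch of that gate (the real checks run through the ssh_runner inside the node container, and the polling interval here is an assumption):

    package criowait

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func waitForCRIO(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // First gate: the socket path from the log.
            if _, err := os.Stat("/var/run/crio/crio.sock"); err != nil {
                time.Sleep(500 * time.Millisecond)
                continue
            }
            // Second gate: a crictl that actually answers `version`.
            if err := exec.Command("sudo", "/usr/local/bin/crictl", "version").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("cri-o not ready after %s", timeout)
    }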
	I1121 14:58:48.268618  476289 ssh_runner.go:195] Run: crio --version
	I1121 14:58:48.318347  476289 ssh_runner.go:195] Run: crio --version
	I1121 14:58:48.371421  476289 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:58:47.880127  476123 ssh_runner.go:195] Run: systemctl --version
	I1121 14:58:47.887817  476123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:58:47.949809  476123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:58:47.955129  476123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:58:47.955204  476123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:58:47.993073  476123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 14:58:47.993099  476123 start.go:496] detecting cgroup driver to use...
	I1121 14:58:47.993132  476123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:58:47.993185  476123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:58:48.020241  476123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:58:48.038520  476123 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:58:48.038668  476123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:58:48.062828  476123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:58:48.084079  476123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:58:48.248471  476123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:58:48.411485  476123 docker.go:234] disabling docker service ...
	I1121 14:58:48.411551  476123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:58:48.437895  476123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:58:48.454575  476123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:58:48.582268  476123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:58:48.712241  476123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:58:48.725805  476123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:58:48.741181  476123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:58:48.741271  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.756986  476123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 14:58:48.757063  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.774560  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.789680  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.798690  476123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:58:48.809184  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.823064  476123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.846000  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.858546  476123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:58:48.870278  476123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:58:48.882343  476123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:49.073044  476123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 14:58:49.308283  476123 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:58:49.308363  476123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:58:49.313116  476123 start.go:564] Will wait 60s for crictl version
	I1121 14:58:49.313182  476123 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.324979  476123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:58:49.381216  476123 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:58:49.381320  476123 ssh_runner.go:195] Run: crio --version
	I1121 14:58:49.423912  476123 ssh_runner.go:195] Run: crio --version
	I1121 14:58:49.476504  476123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:58:49.479423  476123 cli_runner.go:164] Run: docker network inspect embed-certs-902161 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:58:49.501765  476123 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 14:58:49.506731  476123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
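That bash one-liner keeps the host.minikube.internal mapping idempotent: strip any stale line, append the current gateway IP, and copy the result back via a temp file (a bare `sudo ... > /etc/hosts` would perform the redirection without root). Reconstructing the same command string from Go, with the gateway hard-coded as in this run:

    package hosts

    import "os/exec"

    func updateHosts() error {
        // entry carries a literal tab, matching the logged echo argument.
        entry := "192.168.76.1\thost.minikube.internal"
        cmd := `{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "` + entry +
            `"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`
        return exec.Command("/bin/bash", "-c", cmd).Run()
    }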
	I1121 14:58:49.523227  476123 kubeadm.go:884] updating cluster {Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:58:49.523340  476123 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:49.523393  476123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:58:49.574669  476123 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:58:49.574689  476123 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:58:49.574744  476123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:58:49.616766  476123 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:58:49.616787  476123 cache_images.go:86] Images are preloaded, skipping loading
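Note: the preload check above parses the output of `sudo crictl images --output json` and compares the repo tags against the expected image list for this Kubernetes version. To reproduce it by hand against the same profile (jq on the host is an assumption of this sketch, not something the test uses):

	minikube -p embed-certs-902161 ssh -- sudo crictl images --output json \
	  | jq -r '.images[].repoTags[]'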
	I1121 14:58:49.616795  476123 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1121 14:58:49.616876  476123 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-902161 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
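Note: the [Unit]/[Service] fragment above is rendered into the systemd drop-in that the scp steps below write to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect the effective unit on the node:

	minikube -p embed-certs-902161 ssh -- systemctl cat kubelet
	minikube -p embed-certs-902161 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf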
	I1121 14:58:49.616953  476123 ssh_runner.go:195] Run: crio config
	I1121 14:58:49.698694  476123 cni.go:84] Creating CNI manager for ""
	I1121 14:58:49.698842  476123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:58:49.698876  476123 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:58:49.698943  476123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-902161 NodeName:embed-certs-902161 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:58:49.699105  476123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-902161"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
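Note: this four-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml below and fed to kubeadm init. A quick sanity check of such a file, assuming a kubeadm new enough to have the validate subcommand (v1.26+):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml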
	
	I1121 14:58:49.699222  476123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:58:49.710761  476123 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:58:49.710834  476123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:58:49.721171  476123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1121 14:58:49.742282  476123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:58:49.767374  476123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1121 14:58:49.791815  476123 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:58:49.798158  476123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:58:49.815650  476123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:50.050744  476123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:58:50.089099  476123 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161 for IP: 192.168.76.2
	I1121 14:58:50.089120  476123 certs.go:195] generating shared ca certs ...
	I1121 14:58:50.089137  476123 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:50.089290  476123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 14:58:50.089333  476123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 14:58:50.089349  476123 certs.go:257] generating profile certs ...
	I1121 14:58:50.089410  476123 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.key
	I1121 14:58:50.089421  476123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.crt with IP's: []
	I1121 14:58:50.703976  476123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.crt ...
	I1121 14:58:50.704006  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.crt: {Name:mkec47ccc9c9ed88a1dce4f3a33a8315759141f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:50.704169  476123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.key ...
	I1121 14:58:50.704183  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.key: {Name:mk5eac5c4edceca70c60b5ca0e05d68ada8c79b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:50.704264  476123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key.5d5840b9
	I1121 14:58:50.704281  476123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt.5d5840b9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1121 14:58:51.073201  476123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt.5d5840b9 ...
	I1121 14:58:51.073276  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt.5d5840b9: {Name:mk7b591abb181c69a197ce4593beda8951c37712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:51.073486  476123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key.5d5840b9 ...
	I1121 14:58:51.073522  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key.5d5840b9: {Name:mk64cb9c0bfc236340e6def13d1152f902db06d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:51.073651  476123 certs.go:382] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt.5d5840b9 -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt
	I1121 14:58:51.073778  476123 certs.go:386] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key.5d5840b9 -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key
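Note: the apiserver serving cert generated above is signed for the service ClusterIP (10.96.0.1), the loopback addresses, and the node IP (192.168.76.2). The SANs can be confirmed on the host with openssl (path shortened here to the usual ~/.minikube layout):

	openssl x509 -noout -text \
	  -in ~/.minikube/profiles/embed-certs-902161/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'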
	I1121 14:58:51.073866  476123 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key
	I1121 14:58:51.073916  476123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.crt with IP's: []
	I1121 14:58:51.307432  476123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.crt ...
	I1121 14:58:51.308255  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.crt: {Name:mk46d078b26aae6798e6e49bc7315f6b0421a7bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:51.308521  476123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key ...
	I1121 14:58:51.308564  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key: {Name:mkd909ebe6563121d3e64a35c4b80b17befbc483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:51.308837  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 14:58:51.308907  476123 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 14:58:51.308933  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:58:51.309010  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:58:51.309056  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:58:51.309111  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 14:58:51.309185  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:58:51.309803  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:58:51.330192  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:58:51.349866  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:58:51.372500  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:58:51.393347  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1121 14:58:51.413256  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:58:51.432548  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:58:51.451901  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:58:51.471693  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:58:51.491516  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 14:58:51.511554  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 14:58:51.531716  476123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:58:51.546186  476123 ssh_runner.go:195] Run: openssl version
	I1121 14:58:51.552865  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:58:51.562561  476123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:51.566919  476123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:51.566998  476123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:51.610054  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:58:51.618685  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 14:58:51.626835  476123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 14:58:51.631098  476123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 14:58:51.631168  476123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 14:58:51.676351  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 14:58:51.684703  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 14:58:51.692869  476123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 14:58:51.697276  476123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 14:58:51.697341  476123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 14:58:51.763882  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
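Note: the ls/openssl/ln sequence above implements OpenSSL's hashed-directory CA lookup: each certificate copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject_hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). The hash comes straight from the certificate:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"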
	I1121 14:58:51.786221  476123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:58:51.790934  476123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:58:51.790998  476123 kubeadm.go:401] StartCluster: {Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:58:51.791072  476123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:58:51.791133  476123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:58:51.849130  476123 cri.go:89] found id: ""
	I1121 14:58:51.849221  476123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:58:51.861018  476123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:58:51.869358  476123 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:58:51.869433  476123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:58:51.882122  476123 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:58:51.882154  476123 kubeadm.go:158] found existing configuration files:
	
	I1121 14:58:51.882213  476123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:58:51.894889  476123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:58:51.894986  476123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:58:51.902528  476123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:58:51.915068  476123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:58:51.915146  476123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:58:51.926210  476123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:58:51.939155  476123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:58:51.939257  476123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:58:51.950129  476123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:58:51.963221  476123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:58:51.963294  476123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:58:51.974199  476123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
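Note: because the "node" is a Docker container sharing the host kernel, kubeadm preflight checks such as Swap, NumCPU, Mem, port availability, and SystemVerification are expected to fail and are explicitly ignored above. The preflight phase can be replayed in isolation inside the node with the same config, e.g.:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification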
	I1121 14:58:52.031991  476123 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:58:52.033251  476123 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:58:52.068052  476123 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:58:52.068147  476123 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:58:52.068198  476123 kubeadm.go:319] OS: Linux
	I1121 14:58:52.068261  476123 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:58:52.068325  476123 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:58:52.068404  476123 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:58:52.068460  476123 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:58:52.068527  476123 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:58:52.068591  476123 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:58:52.068669  476123 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:58:52.068736  476123 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:58:52.068800  476123 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:58:52.154694  476123 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:58:52.154838  476123 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:58:52.154942  476123 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:58:52.168773  476123 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:58:52.174748  476123 out.go:252]   - Generating certificates and keys ...
	I1121 14:58:52.174853  476123 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:58:52.174934  476123 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:58:52.576193  476123 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:58:48.374731  476289 cli_runner.go:164] Run: docker network inspect no-preload-844780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:58:48.389954  476289 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:58:48.394227  476289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:58:48.404976  476289 kubeadm.go:884] updating cluster {Name:no-preload-844780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-844780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:58:48.405086  476289 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:48.405139  476289 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:58:48.443442  476289 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1121 14:58:48.443465  476289 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1121 14:58:48.443500  476289 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:48.443698  476289 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:48.443782  476289 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:48.443855  476289 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:48.443927  476289 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:48.444000  476289 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1121 14:58:48.444077  476289 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:48.444155  476289 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:48.447143  476289 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:48.447502  476289 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:48.447672  476289 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1121 14:58:48.447805  476289 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:48.447922  476289 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:48.448037  476289 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:48.448152  476289 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:48.448341  476289 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:48.743262  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:48.743891  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:48.748979  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:48.768190  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:48.768745  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:48.783973  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1121 14:58:48.842142  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:48.853558  476289 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1121 14:58:48.853679  476289 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:48.853756  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:48.903815  476289 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1121 14:58:48.903883  476289 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:48.903946  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:48.926862  476289 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1121 14:58:48.926966  476289 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:48.927045  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.013557  476289 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1121 14:58:49.013614  476289 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:49.013669  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.013731  476289 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1121 14:58:49.013764  476289 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:49.013794  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.031073  476289 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1121 14:58:49.031161  476289 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:49.031242  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.031357  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:49.031450  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:49.031563  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:49.031665  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:49.031774  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:49.031810  476289 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1121 14:58:49.031888  476289 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:58:49.031940  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.146152  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:49.146358  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:49.146255  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:49.146320  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:49.146502  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:49.146563  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:49.146567  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:58:49.313958  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:58:49.314040  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:49.314091  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:49.314158  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:49.314214  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:49.314281  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:49.314341  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:49.411445  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:58:49.411551  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:58:49.411651  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:58:49.470687  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:58:49.470788  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:58:49.470841  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:58:49.470891  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:58:49.470946  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:49.470994  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:58:49.471050  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:58:49.471096  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:58:49.471143  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:58:49.516247  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1121 14:58:49.516342  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:58:49.516436  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:58:49.516450  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1121 14:58:49.565716  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:58:49.565749  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1121 14:58:49.565807  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:58:49.565817  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1121 14:58:49.565853  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:58:49.565864  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1121 14:58:49.565914  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:58:49.565996  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:58:49.566033  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:58:49.566043  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1121 14:58:49.566080  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:58:49.566089  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1121 14:58:49.644760  476289 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1121 14:58:49.644923  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:49.657697  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:58:49.657734  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1121 14:58:49.700420  476289 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:58:49.700480  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1121 14:58:49.971387  476289 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1121 14:58:49.971483  476289 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:49.971568  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:50.376807  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:50.376865  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:58:50.486483  476289 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:58:50.486553  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:58:50.580724  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:53.605468  476123 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:58:54.177502  476123 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:58:55.084671  476123 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:58:56.424188  476123 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:58:56.424631  476123 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-902161 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:58:56.602968  476123 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:58:56.603560  476123 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-902161 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:58:57.594687  476123 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:58:53.313063  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (2.826485802s)
	I1121 14:58:53.313090  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:58:53.313107  476289 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:58:53.313157  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:58:53.313224  476289 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.73247539s)
	I1121 14:58:53.313261  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:54.473693  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.160496275s)
	I1121 14:58:54.473727  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:58:54.473753  476289 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:58:54.473806  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:58:54.473888  476289 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.16061544s)
	I1121 14:58:54.473917  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:58:54.473993  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:58:56.920874  476289 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.446857171s)
	I1121 14:58:56.920907  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:58:56.920931  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1121 14:58:56.921052  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.447229565s)
	I1121 14:58:56.921067  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:58:56.921084  476289 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:58:56.921129  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:58:58.400627  476123 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:58:58.545520  476123 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:58:58.546029  476123 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:58:58.990478  476123 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:58:59.464735  476123 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:59:00.080480  476123 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:59:00.588324  476123 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:59:00.828782  476123 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:59:00.828881  476123 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:59:00.830380  476123 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:59:00.834147  476123 out.go:252]   - Booting up control plane ...
	I1121 14:59:00.834249  476123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:59:00.834327  476123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:59:00.835328  476123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:59:00.855008  476123 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:59:00.855279  476123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:59:00.864082  476123 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:59:00.864685  476123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:59:00.864860  476123 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:59:01.040815  476123 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:59:01.040944  476123 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:58:58.965580  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.044425306s)
	I1121 14:58:58.965615  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:58:58.965634  476289 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:58:58.965681  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:59:00.772630  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.806920899s)
	I1121 14:59:00.772658  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:59:00.772677  476289 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:59:00.772727  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:59:03.039382  476123 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001685187s
	I1121 14:59:03.046063  476123 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:59:03.046177  476123 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1121 14:59:03.046277  476123 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:59:03.046426  476123 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
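Note: the control-plane-check endpoints above can be probed by hand from inside the node while kubeadm waits (curl -k is needed since the serving certs are not in the container's trust store):

	curl -sk https://192.168.76.2:8443/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler
	curl -s  http://127.0.0.1:10248/healthz       # kubelet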
	I1121 14:59:06.151434  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.378681469s)
	I1121 14:59:06.151465  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:59:06.151482  476289 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:59:06.151530  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:59:07.104611  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1121 14:59:07.104649  476289 cache_images.go:125] Successfully loaded all cached images
	I1121 14:59:07.104656  476289 cache_images.go:94] duration metric: took 18.661179097s to LoadCachedImages
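Note: the no-preload flow above is: detect missing images with crictl, remove stale tags with crictl rmi, scp each cached tarball from ~/.minikube/cache/images/arm64/... into /var/lib/minikube/images/ on the node, then load it into CRI-O's shared image store with sudo podman load. A hand-driven equivalent for one image (minikube cp and minikube ssh stand in for the test's internal ssh runner):

	minikube -p no-preload-844780 cp \
	  ~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 /var/lib/minikube/images/pause_3.10.1
	minikube -p no-preload-844780 ssh -- sudo podman load -i /var/lib/minikube/images/pause_3.10.1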
	I1121 14:59:07.104667  476289 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:59:07.104767  476289 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-844780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-844780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:59:07.104858  476289 ssh_runner.go:195] Run: crio config
	I1121 14:59:07.232415  476289 cni.go:84] Creating CNI manager for ""
	I1121 14:59:07.232439  476289 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:59:07.232456  476289 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:59:07.232488  476289 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-844780 NodeName:no-preload-844780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:59:07.232679  476289 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-844780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
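	Note: the generated file can be sanity-checked before kubeadm consumes it; a sketch, assuming the v1.34.1 kubeadm binary is already on the node (the `config validate` subcommand exists in recent kubeadm releases):

		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml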
	I1121 14:59:07.232787  476289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:59:07.246212  476289 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1121 14:59:07.246327  476289 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1121 14:59:07.258833  476289 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1121 14:59:07.258908  476289 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1121 14:59:07.259120  476289 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1121 14:59:07.259250  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1121 14:59:07.264475  476289 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1121 14:59:07.264510  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1121 14:59:08.208719  476289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:59:08.249807  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1121 14:59:08.260918  476289 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1121 14:59:08.261000  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1121 14:59:08.302837  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1121 14:59:08.326209  476289 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1121 14:59:08.326299  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
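	Note: the ?checksum=file:...sha256 suffix on the download URLs above tells minikube's downloader to fetch the published SHA-256 digest alongside each binary and verify it before caching. Done by hand, the equivalent is roughly (URLs as logged above):

		curl -fLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet
		curl -fL  https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -o kubelet.sha256
		echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # fails unless the digest matches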
	I1121 14:59:09.106512  476289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:59:09.125361  476289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 14:59:09.143760  476289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:59:09.172792  476289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1121 14:59:09.202075  476289 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:59:09.209035  476289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
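	Note: the one-liner above is an idempotent hosts-file update: grep -v strips any existing control-plane.minikube.internal entry, the fresh mapping is appended, and the result is copied back over /etc/hosts in a single sudo step (writing via a temp file avoids truncating /etc/hosts while it is still being read). The same pattern works for any pinned host entry, e.g. with a hypothetical myhost.internal:

		{ grep -v $'\tmyhost.internal$' /etc/hosts; echo $'10.0.0.5\tmyhost.internal'; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$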
	I1121 14:59:09.218830  476289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:59:09.433889  476289 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:59:09.476900  476289 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780 for IP: 192.168.85.2
	I1121 14:59:09.476979  476289 certs.go:195] generating shared ca certs ...
	I1121 14:59:09.477009  476289 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:09.477255  476289 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 14:59:09.477336  476289 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 14:59:09.477379  476289 certs.go:257] generating profile certs ...
	I1121 14:59:09.477469  476289 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.key
	I1121 14:59:09.477501  476289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt with IP's: []
	I1121 14:59:10.240174  476289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt ...
	I1121 14:59:10.240248  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: {Name:mk99392db7bd9e10b58b67eae89522f76d5a1e9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:10.240497  476289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.key ...
	I1121 14:59:10.240532  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.key: {Name:mkedb7c7d0e8e15c68374b08b4b459f1f84322bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:10.240670  476289 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key.88a7d8ce
	I1121 14:59:10.240708  476289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt.88a7d8ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:59:11.123064  476289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt.88a7d8ce ...
	I1121 14:59:11.123137  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt.88a7d8ce: {Name:mkb75663ef02700dcf7aa1a0f7f0156ca6cd7899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:11.123376  476289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key.88a7d8ce ...
	I1121 14:59:11.123421  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key.88a7d8ce: {Name:mkeeae9f7eabc6f3797a27e7f6a3df0ac08eb05a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:11.123559  476289 certs.go:382] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt.88a7d8ce -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt
	I1121 14:59:11.123684  476289 certs.go:386] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key.88a7d8ce -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key
	I1121 14:59:11.123792  476289 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.key
	I1121 14:59:11.123835  476289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.crt with IP's: []
	I1121 14:59:11.402320  476289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.crt ...
	I1121 14:59:11.402391  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.crt: {Name:mk8170063ba1905f83463bd74dcbccbe2033ce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:11.402596  476289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.key ...
	I1121 14:59:11.402631  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.key: {Name:mk32751d5649809e0c1a634c68c9138f872a4276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
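	Note: the apiserver cert generated above must carry the service IP, loopbacks, and node IP as SANs (the IP's list logged at 14:59:10.240708), or clients validating the serving cert will refuse to connect. One way to confirm the SANs on the written cert, assuming openssl is available on the host:

		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt \
		  | grep -A1 'Subject Alternative Name'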
	I1121 14:59:11.402933  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 14:59:11.403000  476289 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 14:59:11.403026  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:59:11.403084  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:59:11.403134  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:59:11.403184  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 14:59:11.403252  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:59:11.403838  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:59:11.435880  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:59:11.475203  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:59:11.509821  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:59:11.530503  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:59:11.552667  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:59:11.574739  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:59:11.607121  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:59:11.627228  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 14:59:11.650292  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:59:11.671405  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 14:59:11.690275  476289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:59:11.703539  476289 ssh_runner.go:195] Run: openssl version
	I1121 14:59:11.710300  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:59:11.718760  476289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:59:11.724151  476289 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:59:11.724250  476289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:59:11.767112  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:59:11.776089  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 14:59:11.785919  476289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 14:59:11.790649  476289 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 14:59:11.790767  476289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 14:59:11.835162  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 14:59:11.845247  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 14:59:11.854623  476289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 14:59:11.859205  476289 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 14:59:11.859314  476289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 14:59:11.903829  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
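	Note: the 8-hex-digit symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links. At verification time OpenSSL hashes a certificate's subject and looks for <hash>.0 in /etc/ssl/certs, which is why each cert is run through `openssl x509 -hash` before linking. Replicated for an arbitrary CA file (my-ca.pem is a placeholder):

		h=$(openssl x509 -hash -noout -in my-ca.pem)               # prints e.g. b5213941
		sudo ln -fs /usr/share/ca-certificates/my-ca.pem /etc/ssl/certs/"$h".0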
	I1121 14:59:11.913268  476289 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:59:11.917884  476289 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:59:11.917991  476289 kubeadm.go:401] StartCluster: {Name:no-preload-844780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-844780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:59:11.918082  476289 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:59:11.918156  476289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:59:11.948950  476289 cri.go:89] found id: ""
	I1121 14:59:11.949058  476289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:59:11.959434  476289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:59:11.972765  476289 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:59:11.972881  476289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:59:12.005917  476289 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:59:12.005940  476289 kubeadm.go:158] found existing configuration files:
	
	I1121 14:59:12.006039  476289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:59:12.031231  476289 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:59:12.031348  476289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:59:12.050150  476289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:59:12.069282  476289 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:59:12.069400  476289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:59:12.078750  476289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:59:12.088923  476289 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:59:12.089041  476289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:59:12.097523  476289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:59:12.106790  476289 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:59:12.106903  476289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:59:12.115281  476289 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:59:12.163280  476289 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:59:12.163792  476289 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:59:12.205037  476289 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:59:12.205349  476289 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:59:12.205437  476289 kubeadm.go:319] OS: Linux
	I1121 14:59:12.205528  476289 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:59:12.205611  476289 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:59:12.205717  476289 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:59:12.205790  476289 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:59:12.205874  476289 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:59:12.205953  476289 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:59:12.206025  476289 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:59:12.206112  476289 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:59:12.206199  476289 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:59:12.294846  476289 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:59:12.295025  476289 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:59:12.295147  476289 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:59:12.316724  476289 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:59:10.845757  476123 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.800601827s
	I1121 14:59:11.508710  476123 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 8.463083675s
	I1121 14:59:12.322013  476289 out.go:252]   - Generating certificates and keys ...
	I1121 14:59:12.322172  476289 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:59:12.322281  476289 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:59:12.762910  476289 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:59:13.048274  476123 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.00295778s
	I1121 14:59:13.082253  476123 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:59:13.103124  476123 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:59:13.125298  476123 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:59:13.125795  476123 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-902161 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:59:13.141070  476123 kubeadm.go:319] [bootstrap-token] Using token: rephq1.20w5hkzrb35aw52v
	I1121 14:59:13.143927  476123 out.go:252]   - Configuring RBAC rules ...
	I1121 14:59:13.144053  476123 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:59:13.149965  476123 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:59:13.162505  476123 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:59:13.168771  476123 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:59:13.174433  476123 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:59:13.182093  476123 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:59:13.455866  476123 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:59:14.014297  476123 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:59:14.455995  476123 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:59:14.457566  476123 kubeadm.go:319] 
	I1121 14:59:14.457665  476123 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:59:14.457672  476123 kubeadm.go:319] 
	I1121 14:59:14.457752  476123 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:59:14.457757  476123 kubeadm.go:319] 
	I1121 14:59:14.457783  476123 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:59:14.458273  476123 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:59:14.458340  476123 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:59:14.458346  476123 kubeadm.go:319] 
	I1121 14:59:14.458403  476123 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:59:14.458408  476123 kubeadm.go:319] 
	I1121 14:59:14.458457  476123 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:59:14.458461  476123 kubeadm.go:319] 
	I1121 14:59:14.458516  476123 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:59:14.458594  476123 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:59:14.458665  476123 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:59:14.458670  476123 kubeadm.go:319] 
	I1121 14:59:14.458986  476123 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:59:14.459072  476123 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:59:14.459077  476123 kubeadm.go:319] 
	I1121 14:59:14.459385  476123 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rephq1.20w5hkzrb35aw52v \
	I1121 14:59:14.459499  476123 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 14:59:14.459744  476123 kubeadm.go:319] 	--control-plane 
	I1121 14:59:14.459754  476123 kubeadm.go:319] 
	I1121 14:59:14.460034  476123 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:59:14.460045  476123 kubeadm.go:319] 
	I1121 14:59:14.460345  476123 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rephq1.20w5hkzrb35aw52v \
	I1121 14:59:14.460684  476123 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
	I1121 14:59:14.466023  476123 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 14:59:14.466384  476123 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:59:14.466554  476123 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
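	Note: the [control-plane-check] lines above poll the same health endpoints that can be queried manually once init finishes. kube-apiserver serves /livez on the advertise address, while controller-manager and scheduler expose their probes only on loopback. A spot check from inside this node (embed-certs-902161, advertise address 192.168.76.2):

		curl -k https://192.168.76.2:8443/livez     # kube-apiserver (self-signed serving cert, hence -k)
		curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
		curl -k https://127.0.0.1:10259/livez       # kube-scheduler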
	I1121 14:59:14.466591  476123 cni.go:84] Creating CNI manager for ""
	I1121 14:59:14.466628  476123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:59:14.471941  476123 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:59:14.474872  476123 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:59:14.486745  476123 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:59:14.486764  476123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:59:14.515532  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:59:15.008988  476123 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:59:15.009178  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:15.009278  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-902161 minikube.k8s.io/updated_at=2025_11_21T14_59_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=embed-certs-902161 minikube.k8s.io/primary=true
	I1121 14:59:15.307081  476123 ops.go:34] apiserver oom_adj: -16
	I1121 14:59:15.307187  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:15.808254  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:16.307315  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:16.808123  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:17.308192  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:13.307110  476289 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:59:13.399361  476289 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:59:14.083511  476289 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:59:15.191567  476289 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:59:15.191713  476289 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-844780] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:59:15.929787  476289 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:59:15.930334  476289 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-844780] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:59:16.524559  476289 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:59:16.703071  476289 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:59:17.007209  476289 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:59:17.007563  476289 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:59:17.606868  476289 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:59:17.899081  476289 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:59:17.808184  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:18.307855  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:18.807521  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:19.307617  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:19.593516  476123 kubeadm.go:1114] duration metric: took 4.584414734s to wait for elevateKubeSystemPrivileges
	I1121 14:59:19.593547  476123 kubeadm.go:403] duration metric: took 27.802553781s to StartCluster
	I1121 14:59:19.593565  476123 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:19.593624  476123 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:59:19.594619  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:19.594850  476123 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:59:19.594966  476123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:59:19.595216  476123 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:59:19.595259  476123 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:59:19.595325  476123 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-902161"
	I1121 14:59:19.595340  476123 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-902161"
	I1121 14:59:19.595365  476123 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 14:59:19.595886  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:59:19.596354  476123 addons.go:70] Setting default-storageclass=true in profile "embed-certs-902161"
	I1121 14:59:19.596377  476123 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-902161"
	I1121 14:59:19.596675  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:59:19.598436  476123 out.go:179] * Verifying Kubernetes components...
	I1121 14:59:19.602045  476123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:59:19.639511  476123 addons.go:239] Setting addon default-storageclass=true in "embed-certs-902161"
	I1121 14:59:19.639554  476123 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 14:59:19.639722  476123 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:59:18.728763  476289 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:59:19.532499  476289 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:59:21.050512  476289 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:59:21.067595  476289 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:59:21.069593  476289 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:59:19.639970  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:59:19.646756  476123 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:59:19.646783  476123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:59:19.646851  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:59:19.692493  476123 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:59:19.692515  476123 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:59:19.692580  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:59:19.707810  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:59:19.728377  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:59:20.141174  476123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:59:20.162326  476123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:59:20.213879  476123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:59:20.248305  476123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:59:21.529137  476123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.38787749s)
	I1121 14:59:21.529202  476123 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.366788017s)
	I1121 14:59:21.529217  476123 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1121 14:59:21.530472  476123 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.316527291s)
	I1121 14:59:21.531290  476123 node_ready.go:35] waiting up to 6m0s for node "embed-certs-902161" to be "Ready" ...
	I1121 14:59:21.531531  476123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.283161437s)
	I1121 14:59:21.592939  476123 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:59:21.595845  476123 addons.go:530] duration metric: took 2.000565832s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:59:22.032910  476123 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-902161" context rescaled to 1 replicas
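	Note: the rescale logged above trims the default two-replica CoreDNS Deployment down to one, which is enough for a single-node cluster. The equivalent kubectl invocation would be something like:

		kubectl --context embed-certs-902161 -n kube-system scale deployment coredns --replicas=1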
	I1121 14:59:21.073596  476289 out.go:252]   - Booting up control plane ...
	I1121 14:59:21.073723  476289 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:59:21.073806  476289 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:59:21.075288  476289 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:59:21.123563  476289 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:59:21.123677  476289 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:59:21.131229  476289 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:59:21.131575  476289 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:59:21.131625  476289 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:59:21.324901  476289 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:59:21.325027  476289 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:59:22.825480  476289 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501578452s
	I1121 14:59:22.829119  476289 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:59:22.829220  476289 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1121 14:59:22.829320  476289 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:59:22.830020  476289 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1121 14:59:23.535065  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:26.035000  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	I1121 14:59:26.792198  476289 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.962302785s
	I1121 14:59:28.528851  476289 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.698977576s
	I1121 14:59:29.332013  476289 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502659173s
	I1121 14:59:29.360935  476289 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:59:29.375366  476289 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:59:29.389589  476289 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:59:29.389812  476289 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-844780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:59:29.402078  476289 kubeadm.go:319] [bootstrap-token] Using token: djj5gi.szbg9jrs40jfwzmo
	I1121 14:59:29.405074  476289 out.go:252]   - Configuring RBAC rules ...
	I1121 14:59:29.405206  476289 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:59:29.409190  476289 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:59:29.421994  476289 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:59:29.426771  476289 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:59:29.431448  476289 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:59:29.435746  476289 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:59:29.739403  476289 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:59:30.194260  476289 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:59:30.738826  476289 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:59:30.739944  476289 kubeadm.go:319] 
	I1121 14:59:30.740022  476289 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:59:30.740036  476289 kubeadm.go:319] 
	I1121 14:59:30.740122  476289 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:59:30.740131  476289 kubeadm.go:319] 
	I1121 14:59:30.740158  476289 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:59:30.740223  476289 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:59:30.740279  476289 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:59:30.740287  476289 kubeadm.go:319] 
	I1121 14:59:30.740345  476289 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:59:30.740353  476289 kubeadm.go:319] 
	I1121 14:59:30.740428  476289 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:59:30.740439  476289 kubeadm.go:319] 
	I1121 14:59:30.740495  476289 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:59:30.740579  476289 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:59:30.740655  476289 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:59:30.740663  476289 kubeadm.go:319] 
	I1121 14:59:30.740752  476289 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:59:30.740838  476289 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:59:30.740845  476289 kubeadm.go:319] 
	I1121 14:59:30.740932  476289 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token djj5gi.szbg9jrs40jfwzmo \
	I1121 14:59:30.741044  476289 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 14:59:30.741071  476289 kubeadm.go:319] 	--control-plane 
	I1121 14:59:30.741080  476289 kubeadm.go:319] 
	I1121 14:59:30.741169  476289 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:59:30.741178  476289 kubeadm.go:319] 
	I1121 14:59:30.741264  476289 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token djj5gi.szbg9jrs40jfwzmo \
	I1121 14:59:30.741375  476289 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
	I1121 14:59:30.744656  476289 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 14:59:30.744895  476289 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:59:30.745011  476289 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:59:30.745032  476289 cni.go:84] Creating CNI manager for ""
	I1121 14:59:30.745040  476289 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:59:30.748373  476289 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1121 14:59:28.534781  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:31.034734  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
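	Note: the node_ready poll above retries until the node reports Ready, which only happens once the kindnet CNI pods are up. The same wait can be expressed directly with kubectl, mirroring the 6m0s budget logged by start.go:

		kubectl --context embed-certs-902161 wait --for=condition=Ready node/embed-certs-902161 --timeout=6m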
	I1121 14:59:30.751367  476289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:59:30.755567  476289 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:59:30.755586  476289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:59:30.770481  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:59:31.080468  476289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:59:31.080536  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:31.080621  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-844780 minikube.k8s.io/updated_at=2025_11_21T14_59_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=no-preload-844780 minikube.k8s.io/primary=true
	I1121 14:59:31.107263  476289 ops.go:34] apiserver oom_adj: -16
	I1121 14:59:31.252489  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:31.752594  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:32.253127  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:32.752535  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:33.252546  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:33.752509  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:34.252594  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:34.752829  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:35.253044  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:35.392107  476289 kubeadm.go:1114] duration metric: took 4.311629625s to wait for elevateKubeSystemPrivileges
	I1121 14:59:35.392140  476289 kubeadm.go:403] duration metric: took 23.474154301s to StartCluster
	I1121 14:59:35.392157  476289 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:35.392220  476289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:59:35.393813  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:35.394242  476289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:59:35.394551  476289 config.go:182] Loaded profile config "no-preload-844780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:59:35.394613  476289 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:59:35.394687  476289 addons.go:70] Setting storage-provisioner=true in profile "no-preload-844780"
	I1121 14:59:35.394702  476289 addons.go:239] Setting addon storage-provisioner=true in "no-preload-844780"
	I1121 14:59:35.394726  476289 host.go:66] Checking if "no-preload-844780" exists ...
	I1121 14:59:35.395223  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:59:35.395384  476289 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:59:35.395799  476289 addons.go:70] Setting default-storageclass=true in profile "no-preload-844780"
	I1121 14:59:35.395822  476289 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-844780"
	I1121 14:59:35.396101  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:59:35.398725  476289 out.go:179] * Verifying Kubernetes components...
	I1121 14:59:35.403918  476289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:59:35.436646  476289 addons.go:239] Setting addon default-storageclass=true in "no-preload-844780"
	I1121 14:59:35.436685  476289 host.go:66] Checking if "no-preload-844780" exists ...
	I1121 14:59:35.437798  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:59:35.439933  476289 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:59:35.443174  476289 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:59:35.443196  476289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:59:35.443263  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:59:35.479440  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:59:35.481314  476289 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:59:35.481334  476289 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:59:35.481407  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:59:35.513660  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:59:35.685134  476289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:59:35.723111  476289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:59:35.753823  476289 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:59:35.799789  476289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:59:36.374464  476289 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:59:36.832359  476289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.109211431s)
	I1121 14:59:36.832457  476289 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.078606193s)
	I1121 14:59:36.832480  476289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.032672231s)
	I1121 14:59:36.834100  476289 node_ready.go:35] waiting up to 6m0s for node "no-preload-844780" to be "Ready" ...
	I1121 14:59:36.847942  476289 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1121 14:59:33.035499  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:35.535508  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	I1121 14:59:36.850834  476289 addons.go:530] duration metric: took 1.456202682s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:59:36.878893  476289 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-844780" context rescaled to 1 replicas
	W1121 14:59:38.039737  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:40.534969  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:38.841106  476289 node_ready.go:57] node "no-preload-844780" has "Ready":"False" status (will retry)
	W1121 14:59:41.338164  476289 node_ready.go:57] node "no-preload-844780" has "Ready":"False" status (will retry)
	W1121 14:59:43.034906  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:45.041009  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:47.534505  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:43.837103  476289 node_ready.go:57] node "no-preload-844780" has "Ready":"False" status (will retry)
	W1121 14:59:45.837618  476289 node_ready.go:57] node "no-preload-844780" has "Ready":"False" status (will retry)
	W1121 14:59:47.837880  476289 node_ready.go:57] node "no-preload-844780" has "Ready":"False" status (will retry)
	I1121 14:59:49.838799  476289 node_ready.go:49] node "no-preload-844780" is "Ready"
	I1121 14:59:49.838831  476289 node_ready.go:38] duration metric: took 13.00464932s for node "no-preload-844780" to be "Ready" ...
	I1121 14:59:49.838846  476289 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:59:49.838915  476289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:59:49.855056  476289 api_server.go:72] duration metric: took 14.459635044s to wait for apiserver process to appear ...
	I1121 14:59:49.855083  476289 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:59:49.855103  476289 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:59:49.864235  476289 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:59:49.865284  476289 api_server.go:141] control plane version: v1.34.1
	I1121 14:59:49.865309  476289 api_server.go:131] duration metric: took 10.218846ms to wait for apiserver health ...
	I1121 14:59:49.865319  476289 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:59:49.869960  476289 system_pods.go:59] 8 kube-system pods found
	I1121 14:59:49.870059  476289 system_pods.go:61] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:59:49.870089  476289 system_pods.go:61] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running
	I1121 14:59:49.870179  476289 system_pods.go:61] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 14:59:49.870243  476289 system_pods.go:61] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running
	I1121 14:59:49.870318  476289 system_pods.go:61] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running
	I1121 14:59:49.870345  476289 system_pods.go:61] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running
	I1121 14:59:49.870361  476289 system_pods.go:61] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running
	I1121 14:59:49.870420  476289 system_pods.go:61] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:59:49.870459  476289 system_pods.go:74] duration metric: took 5.134371ms to wait for pod list to return data ...
	I1121 14:59:49.870485  476289 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:59:49.874699  476289 default_sa.go:45] found service account: "default"
	I1121 14:59:49.874783  476289 default_sa.go:55] duration metric: took 4.277975ms for default service account to be created ...
	I1121 14:59:49.874808  476289 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:59:49.889528  476289 system_pods.go:86] 8 kube-system pods found
	I1121 14:59:49.889563  476289 system_pods.go:89] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:59:49.889571  476289 system_pods.go:89] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running
	I1121 14:59:49.889577  476289 system_pods.go:89] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 14:59:49.889583  476289 system_pods.go:89] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running
	I1121 14:59:49.889588  476289 system_pods.go:89] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running
	I1121 14:59:49.889592  476289 system_pods.go:89] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running
	I1121 14:59:49.889596  476289 system_pods.go:89] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running
	I1121 14:59:49.889602  476289 system_pods.go:89] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:59:49.889636  476289 retry.go:31] will retry after 237.389598ms: missing components: kube-dns
	I1121 14:59:50.134184  476289 system_pods.go:86] 8 kube-system pods found
	I1121 14:59:50.134282  476289 system_pods.go:89] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:59:50.134305  476289 system_pods.go:89] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running
	I1121 14:59:50.134341  476289 system_pods.go:89] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 14:59:50.134370  476289 system_pods.go:89] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running
	I1121 14:59:50.134397  476289 system_pods.go:89] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running
	I1121 14:59:50.134415  476289 system_pods.go:89] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running
	I1121 14:59:50.134443  476289 system_pods.go:89] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running
	I1121 14:59:50.134471  476289 system_pods.go:89] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:59:50.134502  476289 retry.go:31] will retry after 299.453607ms: missing components: kube-dns
	I1121 14:59:50.439861  476289 system_pods.go:86] 8 kube-system pods found
	I1121 14:59:50.439946  476289 system_pods.go:89] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Running
	I1121 14:59:50.439969  476289 system_pods.go:89] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running
	I1121 14:59:50.439986  476289 system_pods.go:89] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 14:59:50.440021  476289 system_pods.go:89] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running
	I1121 14:59:50.440045  476289 system_pods.go:89] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running
	I1121 14:59:50.440062  476289 system_pods.go:89] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running
	I1121 14:59:50.440079  476289 system_pods.go:89] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running
	I1121 14:59:50.440107  476289 system_pods.go:89] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Running
	I1121 14:59:50.440131  476289 system_pods.go:126] duration metric: took 565.298663ms to wait for k8s-apps to be running ...
	I1121 14:59:50.440151  476289 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:59:50.440236  476289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:59:50.464899  476289 system_svc.go:56] duration metric: took 24.736818ms WaitForService to wait for kubelet
	I1121 14:59:50.464976  476289 kubeadm.go:587] duration metric: took 15.069559286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:59:50.465026  476289 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:59:50.468694  476289 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:59:50.468774  476289 node_conditions.go:123] node cpu capacity is 2
	I1121 14:59:50.468801  476289 node_conditions.go:105] duration metric: took 3.742871ms to run NodePressure ...
	I1121 14:59:50.468839  476289 start.go:242] waiting for startup goroutines ...
	I1121 14:59:50.468863  476289 start.go:247] waiting for cluster config update ...
	I1121 14:59:50.468888  476289 start.go:256] writing updated cluster config ...
	I1121 14:59:50.469240  476289 ssh_runner.go:195] Run: rm -f paused
	I1121 14:59:50.473648  476289 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:59:50.478095  476289 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2mqjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.482829  476289 pod_ready.go:94] pod "coredns-66bc5c9577-2mqjs" is "Ready"
	I1121 14:59:50.482901  476289 pod_ready.go:86] duration metric: took 4.744943ms for pod "coredns-66bc5c9577-2mqjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.485258  476289 pod_ready.go:83] waiting for pod "etcd-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.489544  476289 pod_ready.go:94] pod "etcd-no-preload-844780" is "Ready"
	I1121 14:59:50.489617  476289 pod_ready.go:86] duration metric: took 4.291267ms for pod "etcd-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.491813  476289 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.495853  476289 pod_ready.go:94] pod "kube-apiserver-no-preload-844780" is "Ready"
	I1121 14:59:50.495919  476289 pod_ready.go:86] duration metric: took 4.053709ms for pod "kube-apiserver-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.498158  476289 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.878791  476289 pod_ready.go:94] pod "kube-controller-manager-no-preload-844780" is "Ready"
	I1121 14:59:50.878821  476289 pod_ready.go:86] duration metric: took 380.600331ms for pod "kube-controller-manager-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:51.078424  476289 pod_ready.go:83] waiting for pod "kube-proxy-2zwvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:51.477571  476289 pod_ready.go:94] pod "kube-proxy-2zwvg" is "Ready"
	I1121 14:59:51.477654  476289 pod_ready.go:86] duration metric: took 399.200609ms for pod "kube-proxy-2zwvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:51.678645  476289 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:52.078999  476289 pod_ready.go:94] pod "kube-scheduler-no-preload-844780" is "Ready"
	I1121 14:59:52.079029  476289 pod_ready.go:86] duration metric: took 400.355215ms for pod "kube-scheduler-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:52.079043  476289 pod_ready.go:40] duration metric: took 1.605323858s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:59:52.135323  476289 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 14:59:52.138660  476289 out.go:179] * Done! kubectl is now configured to use "no-preload-844780" cluster and "default" namespace by default
	W1121 14:59:49.534897  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:52.034528  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:54.035401  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:56.535268  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:59.034776  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 15:00:01.050734  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	I1121 15:00:01.537529  476123 node_ready.go:49] node "embed-certs-902161" is "Ready"
	I1121 15:00:01.537567  476123 node_ready.go:38] duration metric: took 40.006249354s for node "embed-certs-902161" to be "Ready" ...
	I1121 15:00:01.537583  476123 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:00:01.537670  476123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:00:01.666268  476123 api_server.go:72] duration metric: took 42.07137973s to wait for apiserver process to appear ...
	I1121 15:00:01.666296  476123 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:00:01.666317  476123 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:00:01.760129  476123 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 15:00:01.784777  476123 api_server.go:141] control plane version: v1.34.1
	I1121 15:00:01.784811  476123 api_server.go:131] duration metric: took 118.505232ms to wait for apiserver health ...
	I1121 15:00:01.784821  476123 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:00:01.796179  476123 system_pods.go:59] 8 kube-system pods found
	I1121 15:00:01.796227  476123 system_pods.go:61] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Pending
	I1121 15:00:01.796235  476123 system_pods.go:61] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:01.796242  476123 system_pods.go:61] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:01.796247  476123 system_pods.go:61] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:01.796252  476123 system_pods.go:61] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:01.796257  476123 system_pods.go:61] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:01.796262  476123 system_pods.go:61] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:01.796272  476123 system_pods.go:61] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:00:01.796281  476123 system_pods.go:74] duration metric: took 11.453641ms to wait for pod list to return data ...
	I1121 15:00:01.796299  476123 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:00:01.841455  476123 default_sa.go:45] found service account: "default"
	I1121 15:00:01.841497  476123 default_sa.go:55] duration metric: took 45.189606ms for default service account to be created ...
	I1121 15:00:01.841509  476123 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:00:01.854354  476123 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:01.854388  476123 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:01.854402  476123 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:01.854411  476123 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:01.854434  476123 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:01.854440  476123 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:01.854444  476123 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:01.854448  476123 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:01.854457  476123 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:00:01.854486  476123 retry.go:31] will retry after 292.977025ms: missing components: kube-dns
	I1121 15:00:02.240221  476123 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:02.240265  476123 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:02.240273  476123 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:02.240281  476123 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:02.240286  476123 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:02.240291  476123 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:02.240296  476123 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:02.240300  476123 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:02.240307  476123 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:00:02.240324  476123 retry.go:31] will retry after 368.124563ms: missing components: kube-dns
	I1121 15:00:02.629650  476123 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:02.629688  476123 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:02.629697  476123 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:02.629704  476123 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:02.629710  476123 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:02.629715  476123 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:02.629719  476123 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:02.629723  476123 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:02.629727  476123 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Running
	I1121 15:00:02.629743  476123 retry.go:31] will retry after 346.269936ms: missing components: kube-dns
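	
	The retry.go lines above poll the kube-system pod list until every expected component (here kube-dns) reports Running. A minimal client-go sketch of the same check, assuming an illustrative kubeconfig path and a fixed poll interval (the real loop uses jittered backoff and a deadline):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Build a client from a kubeconfig; the path is an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		for attempt := 0; attempt < 20; attempt++ {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err != nil {
				panic(err)
			}
			notRunning := 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					notRunning++
					fmt.Printf("%q is still %s\n", p.Name, p.Status.Phase)
				}
			}
			if notRunning == 0 {
				fmt.Println("all kube-system pods are running")
				return
			}
			time.Sleep(500 * time.Millisecond) // fixed interval here; minikube jitters
		}
	}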
	
	
	==> CRI-O <==
	Nov 21 14:59:50 no-preload-844780 crio[840]: time="2025-11-21T14:59:50.148854759Z" level=info msg="Created container b2e5c062af8d1420751dcb28c6eb0d71c873d40bc7a8bd889f78fb1a9f3dd4e5: kube-system/coredns-66bc5c9577-2mqjs/coredns" id=d6f50314-2d9f-4549-b260-188ad1a1646c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:59:50 no-preload-844780 crio[840]: time="2025-11-21T14:59:50.149761043Z" level=info msg="Starting container: b2e5c062af8d1420751dcb28c6eb0d71c873d40bc7a8bd889f78fb1a9f3dd4e5" id=6e6285a6-728b-4a4d-9514-fd2b1c5d194c name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:59:50 no-preload-844780 crio[840]: time="2025-11-21T14:59:50.151845769Z" level=info msg="Started container" PID=2507 containerID=b2e5c062af8d1420751dcb28c6eb0d71c873d40bc7a8bd889f78fb1a9f3dd4e5 description=kube-system/coredns-66bc5c9577-2mqjs/coredns id=6e6285a6-728b-4a4d-9514-fd2b1c5d194c name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6f99a828ed404aab16657c4fadd45a0b8f8bd934eef2f729536d6ff888cb8d4
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.67959127Z" level=info msg="Running pod sandbox: default/busybox/POD" id=92e5414c-decb-4566-8a48-71892fb8c1ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.67966245Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.684800923Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3eb1e5d37705bf4e5bca48e690856173ed9018621d870cadbc28efff4848de7d UID:e4d9947f-b15c-4e85-a63a-57d09cacf149 NetNS:/var/run/netns/09d67b5b-6940-4da8-af9c-5c4b48f3e35a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400157c330}] Aliases:map[]}"
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.684969467Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.694678078Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3eb1e5d37705bf4e5bca48e690856173ed9018621d870cadbc28efff4848de7d UID:e4d9947f-b15c-4e85-a63a-57d09cacf149 NetNS:/var/run/netns/09d67b5b-6940-4da8-af9c-5c4b48f3e35a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400157c330}] Aliases:map[]}"
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.694830991Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.698487871Z" level=info msg="Ran pod sandbox 3eb1e5d37705bf4e5bca48e690856173ed9018621d870cadbc28efff4848de7d with infra container: default/busybox/POD" id=92e5414c-decb-4566-8a48-71892fb8c1ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.699469635Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d6fd0f99-ec9b-497f-bd11-8a138901c5ba name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.699593082Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d6fd0f99-ec9b-497f-bd11-8a138901c5ba name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.699631122Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d6fd0f99-ec9b-497f-bd11-8a138901c5ba name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.700585447Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=24a58b2c-f7bb-4ba4-b309-2f70ac6609d7 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:59:52 no-preload-844780 crio[840]: time="2025-11-21T14:59:52.704927455Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:59:54 no-preload-844780 crio[840]: time="2025-11-21T14:59:54.692168854Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=24a58b2c-f7bb-4ba4-b309-2f70ac6609d7 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:59:54 no-preload-844780 crio[840]: time="2025-11-21T14:59:54.693010284Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=41725b70-fb18-48c1-9660-7b61e3768a22 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:59:54 no-preload-844780 crio[840]: time="2025-11-21T14:59:54.695587315Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d701c346-e4cc-4d08-8000-e05d5fa1761b name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:59:54 no-preload-844780 crio[840]: time="2025-11-21T14:59:54.701190344Z" level=info msg="Creating container: default/busybox/busybox" id=e6d234a0-8256-411b-9c50-5e090e45a857 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:59:54 no-preload-844780 crio[840]: time="2025-11-21T14:59:54.701362974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:59:54 no-preload-844780 crio[840]: time="2025-11-21T14:59:54.706454931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:59:54 no-preload-844780 crio[840]: time="2025-11-21T14:59:54.706952037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:59:54 no-preload-844780 crio[840]: time="2025-11-21T14:59:54.723252398Z" level=info msg="Created container f726ab8eb0602c803b7a4684ad516edc5a048e3ff85182f913a86a4cda539458: default/busybox/busybox" id=e6d234a0-8256-411b-9c50-5e090e45a857 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:59:54 no-preload-844780 crio[840]: time="2025-11-21T14:59:54.724720165Z" level=info msg="Starting container: f726ab8eb0602c803b7a4684ad516edc5a048e3ff85182f913a86a4cda539458" id=a87e03a1-6c3e-46ea-97c6-7b41478c5625 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:59:54 no-preload-844780 crio[840]: time="2025-11-21T14:59:54.726562534Z" level=info msg="Started container" PID=2557 containerID=f726ab8eb0602c803b7a4684ad516edc5a048e3ff85182f913a86a4cda539458 description=default/busybox/busybox id=a87e03a1-6c3e-46ea-97c6-7b41478c5625 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3eb1e5d37705bf4e5bca48e690856173ed9018621d870cadbc28efff4848de7d
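	
	The entries above are CRI-O servicing CRI calls (/runtime.v1.RuntimeService/RunPodSandbox, CreateContainer, StartContainer). The same RuntimeService can be queried directly over the crio socket; a minimal sketch using k8s.io/cri-api, assuming the default socket path:
	
	package main
	
	import (
		"context"
		"fmt"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial CRI-O's unix socket (the usual default path; an assumption here).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		// Prints roughly the same rows as the "container status" table below.
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State.String())
		}
	}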
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f726ab8eb0602       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   3eb1e5d37705b       busybox                                     default
	b2e5c062af8d1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   a6f99a828ed40       coredns-66bc5c9577-2mqjs                    kube-system
	f868837295ddd       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   a841da6645e8b       storage-provisioner                         kube-system
	a30f06decfe80       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   8b6f8f675892d       kindnet-whwj8                               kube-system
	6675730065df5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   8bf9cc8a8e504       kube-proxy-2zwvg                            kube-system
	5cd8c8a81a3ed       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      41 seconds ago      Running             kube-apiserver            0                   6a6a7d5729b58       kube-apiserver-no-preload-844780            kube-system
	ac826a5b6cce2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      41 seconds ago      Running             kube-controller-manager   0                   18bf92f58eb47       kube-controller-manager-no-preload-844780   kube-system
	3ab76eb7799d1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      41 seconds ago      Running             kube-scheduler            0                   457c3b612f65a       kube-scheduler-no-preload-844780            kube-system
	e83e54dfef8a5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      41 seconds ago      Running             etcd                      0                   bfd38fe5fcd08       etcd-no-preload-844780                      kube-system
	
	
	==> coredns [b2e5c062af8d1420751dcb28c6eb0d71c873d40bc7a8bd889f78fb1a9f3dd4e5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33313 - 18687 "HINFO IN 6013716031106378336.6787681887347808952. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005559491s
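	
	The HINFO NXDOMAIN line is CoreDNS's startup self-probe against its upstream resolver, not an error. The Corefile this instance serves was rewritten by the sed pipeline logged at 14:59:35; the block that pipeline injects ahead of the forward directive, reconstructed from that command, is:
	
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	
	(plus a log directive before errors), which is what makes host.minikube.internal resolvable from pods.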
	
	
	==> describe nodes <==
	Name:               no-preload-844780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-844780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=no-preload-844780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_59_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:59:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-844780
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:00:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:00:01 +0000   Fri, 21 Nov 2025 14:59:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:00:01 +0000   Fri, 21 Nov 2025 14:59:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:00:01 +0000   Fri, 21 Nov 2025 14:59:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 15:00:01 +0000   Fri, 21 Nov 2025 14:59:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-844780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                0ed5c352-e309-429b-9135-9dfa2d81a7b2
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-2mqjs                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-844780                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-whwj8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-844780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-844780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-2zwvg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-844780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 27s   kube-proxy       
	  Normal   Starting                 42s   kubelet          Starting kubelet.
	  Normal   Starting                 34s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s   kubelet          Node no-preload-844780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s   kubelet          Node no-preload-844780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s   kubelet          Node no-preload-844780 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s   node-controller  Node no-preload-844780 event: Registered Node no-preload-844780 in Controller
	  Normal   NodeReady                15s   kubelet          Node no-preload-844780 status is now: NodeReady
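	
	The Ready condition in this table is what the node_ready.go lines in the startup log were polling until the node flipped at 14:59:49. A minimal client-go sketch of that check, assuming the same illustrative kubeconfig path as the sketch above:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-844780", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// A node is "Ready" when the NodeReady condition reports status True.
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s (reason: %s)\n", cond.Status, cond.Reason)
			}
		}
	}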
	
	
	==> dmesg <==
	[Nov21 14:34] overlayfs: idmapped layers are currently not supported
	[Nov21 14:35] overlayfs: idmapped layers are currently not supported
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e83e54dfef8a5c10261c0c9c167c021beb3c66e2a90ce39d992db535649b1073] <==
	{"level":"warn","ts":"2025-11-21T14:59:25.434687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.459422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.476707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.489286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.537021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.539434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.569802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.586422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.610515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.629877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.643896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.666045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.683669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.695177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.713822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.739609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.751091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.770160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.796609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.818126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.878778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.890985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:25.918269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:26.039121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55664","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:59:36.543913Z","caller":"traceutil/trace.go:172","msg":"trace[1635021722] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"104.786838ms","start":"2025-11-21T14:59:36.439110Z","end":"2025-11-21T14:59:36.543897Z","steps":["trace[1635021722] 'process raft request'  (duration: 66.876528ms)","trace[1635021722] 'compare'  (duration: 37.597543ms)"],"step_count":2}
	
	
	==> kernel <==
	 15:00:04 up  2:42,  0 user,  load average: 3.88, 3.15, 2.61
	Linux no-preload-844780 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a30f06decfe800c9513a2574b83dc6e21c7abc29bdbf981141d6b96b5191055a] <==
	I1121 14:59:39.006310       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:59:39.006807       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:59:39.007016       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:59:39.007063       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:59:39.007099       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:59:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:59:39.210788       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:59:39.300511       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:59:39.300640       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:59:39.301972       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:59:39.601230       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:59:39.601257       1 metrics.go:72] Registering metrics
	I1121 14:59:39.601317       1 controller.go:711] "Syncing nftables rules"
	I1121 14:59:49.216298       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:59:49.216362       1 main.go:301] handling current node
	I1121 14:59:59.211162       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:59:59.211299       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5cd8c8a81a3edc8eef6ae2d07b7f15c654ed40bb5d1a59e1e73c1c98fbf0bdb0] <==
	I1121 14:59:27.284749       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1121 14:59:27.297999       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1121 14:59:27.333661       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:59:27.333939       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:59:27.347865       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:59:27.349936       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:59:27.503751       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:59:27.941520       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:59:27.951958       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:59:27.951984       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:59:28.765294       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:59:28.820739       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:59:29.018755       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:59:29.027108       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:59:29.028523       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:59:29.037797       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:59:29.122265       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:59:30.153009       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:59:30.189090       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:59:30.208141       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:59:34.131600       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:59:34.145213       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:59:34.826467       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:59:35.127288       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1121 15:00:01.982335       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:34834: use of closed network connection
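	
	The final "use of closed network connection" entry is the apiserver noting a client that hung up mid-stream; it is a warning, not a failure. The healthz checks recorded earlier in the startup log (api_server.go:253) target this same server and can be reproduced with a plain HTTPS GET; a minimal sketch, skipping certificate verification for brevity (minikube's own check trusts the cluster CA instead):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)
	
	func main() {
		// Skip cert verification for this sketch only; the real check pins the cluster CA.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}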
	
	
	==> kube-controller-manager [ac826a5b6cce2fa0a653b1d9880ed05d8e827e0befdb83f9e5e29d9da3acfcf8] <==
	I1121 14:59:34.157274       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:59:34.171676       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:59:34.171944       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:59:34.172449       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:59:34.172515       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:59:34.172590       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-844780"
	I1121 14:59:34.172631       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 14:59:34.172671       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:59:34.173193       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:59:34.173407       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:59:34.173432       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:59:34.173444       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:59:34.173483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:59:34.173495       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:59:34.173501       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:59:34.173550       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:59:34.173581       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:59:34.173766       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:59:34.174221       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:59:34.174458       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:59:34.174622       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:59:34.178036       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:59:34.179155       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:59:34.196697       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:59:54.175416       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6675730065df5c6e585abf4105a6d63471c40f686400faf45c9471272d29b116] <==
	I1121 14:59:36.079549       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:59:36.206326       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:59:36.306788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:59:36.306861       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:59:36.306953       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:59:36.395365       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:59:36.395748       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:59:36.416026       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:59:36.419138       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:59:36.419354       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:59:36.442601       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:59:36.442698       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:59:36.442733       1 config.go:200] "Starting service config controller"
	I1121 14:59:36.442737       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:59:36.442761       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:59:36.442766       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:59:36.446205       1 config.go:309] "Starting node config controller"
	I1121 14:59:36.446220       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:59:36.446226       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:59:36.543321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:59:36.543398       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:59:36.543414       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3ab76eb7799d1b3a7652ce5a7dbc17e1c8f701f6390a35128d4510335937c8ec] <==
	I1121 14:59:26.834888       1 serving.go:386] Generated self-signed cert in-memory
	W1121 14:59:28.481463       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:59:28.481563       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:59:28.481596       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:59:28.481635       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:59:28.501662       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:59:28.501762       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:59:28.504135       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:59:28.504680       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:59:28.504706       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:59:28.504726       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1121 14:59:28.511366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:59:28.511528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:59:28.530759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 14:59:28.536888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1121 14:59:30.104813       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:59:34 no-preload-844780 kubelet[2020]: I1121 14:59:34.944926    2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26e02c8a-4f48-4406-8a0c-05fc4951a8c4-lib-modules\") pod \"kube-proxy-2zwvg\" (UID: \"26e02c8a-4f48-4406-8a0c-05fc4951a8c4\") " pod="kube-system/kube-proxy-2zwvg"
	Nov 21 14:59:34 no-preload-844780 kubelet[2020]: I1121 14:59:34.945047    2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q48b2\" (UniqueName: \"kubernetes.io/projected/26e02c8a-4f48-4406-8a0c-05fc4951a8c4-kube-api-access-q48b2\") pod \"kube-proxy-2zwvg\" (UID: \"26e02c8a-4f48-4406-8a0c-05fc4951a8c4\") " pod="kube-system/kube-proxy-2zwvg"
	Nov 21 14:59:34 no-preload-844780 kubelet[2020]: I1121 14:59:34.945143    2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66ed1cd4-bb39-4b0f-b52e-a4061329e72b-lib-modules\") pod \"kindnet-whwj8\" (UID: \"66ed1cd4-bb39-4b0f-b52e-a4061329e72b\") " pod="kube-system/kindnet-whwj8"
	Nov 21 14:59:34 no-preload-844780 kubelet[2020]: I1121 14:59:34.945293    2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/26e02c8a-4f48-4406-8a0c-05fc4951a8c4-kube-proxy\") pod \"kube-proxy-2zwvg\" (UID: \"26e02c8a-4f48-4406-8a0c-05fc4951a8c4\") " pod="kube-system/kube-proxy-2zwvg"
	Nov 21 14:59:34 no-preload-844780 kubelet[2020]: I1121 14:59:34.945388    2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82b8j\" (UniqueName: \"kubernetes.io/projected/66ed1cd4-bb39-4b0f-b52e-a4061329e72b-kube-api-access-82b8j\") pod \"kindnet-whwj8\" (UID: \"66ed1cd4-bb39-4b0f-b52e-a4061329e72b\") " pod="kube-system/kindnet-whwj8"
	Nov 21 14:59:35 no-preload-844780 kubelet[2020]: E1121 14:59:35.060783    2020 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 21 14:59:35 no-preload-844780 kubelet[2020]: E1121 14:59:35.060822    2020 projected.go:196] Error preparing data for projected volume kube-api-access-q48b2 for pod kube-system/kube-proxy-2zwvg: configmap "kube-root-ca.crt" not found
	Nov 21 14:59:35 no-preload-844780 kubelet[2020]: E1121 14:59:35.060896    2020 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/26e02c8a-4f48-4406-8a0c-05fc4951a8c4-kube-api-access-q48b2 podName:26e02c8a-4f48-4406-8a0c-05fc4951a8c4 nodeName:}" failed. No retries permitted until 2025-11-21 14:59:35.560870377 +0000 UTC m=+5.576168834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q48b2" (UniqueName: "kubernetes.io/projected/26e02c8a-4f48-4406-8a0c-05fc4951a8c4-kube-api-access-q48b2") pod "kube-proxy-2zwvg" (UID: "26e02c8a-4f48-4406-8a0c-05fc4951a8c4") : configmap "kube-root-ca.crt" not found
	Nov 21 14:59:35 no-preload-844780 kubelet[2020]: E1121 14:59:35.062748    2020 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 21 14:59:35 no-preload-844780 kubelet[2020]: E1121 14:59:35.062778    2020 projected.go:196] Error preparing data for projected volume kube-api-access-82b8j for pod kube-system/kindnet-whwj8: configmap "kube-root-ca.crt" not found
	Nov 21 14:59:35 no-preload-844780 kubelet[2020]: E1121 14:59:35.062849    2020 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/66ed1cd4-bb39-4b0f-b52e-a4061329e72b-kube-api-access-82b8j podName:66ed1cd4-bb39-4b0f-b52e-a4061329e72b nodeName:}" failed. No retries permitted until 2025-11-21 14:59:35.562824542 +0000 UTC m=+5.578122966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-82b8j" (UniqueName: "kubernetes.io/projected/66ed1cd4-bb39-4b0f-b52e-a4061329e72b-kube-api-access-82b8j") pod "kindnet-whwj8" (UID: "66ed1cd4-bb39-4b0f-b52e-a4061329e72b") : configmap "kube-root-ca.crt" not found
	Nov 21 14:59:35 no-preload-844780 kubelet[2020]: I1121 14:59:35.657433    2020 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 14:59:35 no-preload-844780 kubelet[2020]: W1121 14:59:35.849307    2020 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/crio-8bf9cc8a8e50455e515bd59f2d6a6e48460f6318078937cba46cfc88cc2870ae WatchSource:0}: Error finding container 8bf9cc8a8e50455e515bd59f2d6a6e48460f6318078937cba46cfc88cc2870ae: Status 404 returned error can't find the container with id 8bf9cc8a8e50455e515bd59f2d6a6e48460f6318078937cba46cfc88cc2870ae
	Nov 21 14:59:36 no-preload-844780 kubelet[2020]: I1121 14:59:36.401624    2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2zwvg" podStartSLOduration=2.401605989 podStartE2EDuration="2.401605989s" podCreationTimestamp="2025-11-21 14:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:59:36.401231814 +0000 UTC m=+6.416530238" watchObservedRunningTime="2025-11-21 14:59:36.401605989 +0000 UTC m=+6.416904421"
	Nov 21 14:59:39 no-preload-844780 kubelet[2020]: I1121 14:59:39.374297    2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-whwj8" podStartSLOduration=2.330905552 podStartE2EDuration="5.374280188s" podCreationTimestamp="2025-11-21 14:59:34 +0000 UTC" firstStartedPulling="2025-11-21 14:59:35.825737456 +0000 UTC m=+5.841035889" lastFinishedPulling="2025-11-21 14:59:38.869112101 +0000 UTC m=+8.884410525" observedRunningTime="2025-11-21 14:59:39.374165052 +0000 UTC m=+9.389463501" watchObservedRunningTime="2025-11-21 14:59:39.374280188 +0000 UTC m=+9.389578620"
	Nov 21 14:59:49 no-preload-844780 kubelet[2020]: I1121 14:59:49.692107    2020 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:59:49 no-preload-844780 kubelet[2020]: I1121 14:59:49.766563    2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/01c5a82c-94b5-42d1-8159-096f9fdca84a-tmp\") pod \"storage-provisioner\" (UID: \"01c5a82c-94b5-42d1-8159-096f9fdca84a\") " pod="kube-system/storage-provisioner"
	Nov 21 14:59:49 no-preload-844780 kubelet[2020]: I1121 14:59:49.766617    2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96d5956d-d71f-4509-86fe-94f9c8b6832a-config-volume\") pod \"coredns-66bc5c9577-2mqjs\" (UID: \"96d5956d-d71f-4509-86fe-94f9c8b6832a\") " pod="kube-system/coredns-66bc5c9577-2mqjs"
	Nov 21 14:59:49 no-preload-844780 kubelet[2020]: I1121 14:59:49.766638    2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh94s\" (UniqueName: \"kubernetes.io/projected/96d5956d-d71f-4509-86fe-94f9c8b6832a-kube-api-access-nh94s\") pod \"coredns-66bc5c9577-2mqjs\" (UID: \"96d5956d-d71f-4509-86fe-94f9c8b6832a\") " pod="kube-system/coredns-66bc5c9577-2mqjs"
	Nov 21 14:59:49 no-preload-844780 kubelet[2020]: I1121 14:59:49.766670    2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxhzx\" (UniqueName: \"kubernetes.io/projected/01c5a82c-94b5-42d1-8159-096f9fdca84a-kube-api-access-dxhzx\") pod \"storage-provisioner\" (UID: \"01c5a82c-94b5-42d1-8159-096f9fdca84a\") " pod="kube-system/storage-provisioner"
	Nov 21 14:59:50 no-preload-844780 kubelet[2020]: W1121 14:59:50.058031    2020 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/crio-a841da6645e8b8eafb42ac3de89289edb97ec8772e62b42574fce130c9834e3e WatchSource:0}: Error finding container a841da6645e8b8eafb42ac3de89289edb97ec8772e62b42574fce130c9834e3e: Status 404 returned error can't find the container with id a841da6645e8b8eafb42ac3de89289edb97ec8772e62b42574fce130c9834e3e
	Nov 21 14:59:50 no-preload-844780 kubelet[2020]: I1121 14:59:50.432905    2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2mqjs" podStartSLOduration=15.43288647 podStartE2EDuration="15.43288647s" podCreationTimestamp="2025-11-21 14:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:59:50.432753283 +0000 UTC m=+20.448051707" watchObservedRunningTime="2025-11-21 14:59:50.43288647 +0000 UTC m=+20.448184894"
	Nov 21 14:59:50 no-preload-844780 kubelet[2020]: I1121 14:59:50.433104    2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.433097861 podStartE2EDuration="14.433097861s" podCreationTimestamp="2025-11-21 14:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:59:50.410439008 +0000 UTC m=+20.425737448" watchObservedRunningTime="2025-11-21 14:59:50.433097861 +0000 UTC m=+20.448396285"
	Nov 21 14:59:52 no-preload-844780 kubelet[2020]: I1121 14:59:52.489103    2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqc8\" (UniqueName: \"kubernetes.io/projected/e4d9947f-b15c-4e85-a63a-57d09cacf149-kube-api-access-whqc8\") pod \"busybox\" (UID: \"e4d9947f-b15c-4e85-a63a-57d09cacf149\") " pod="default/busybox"
	Nov 21 14:59:52 no-preload-844780 kubelet[2020]: W1121 14:59:52.696830    2020 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/crio-3eb1e5d37705bf4e5bca48e690856173ed9018621d870cadbc28efff4848de7d WatchSource:0}: Error finding container 3eb1e5d37705bf4e5bca48e690856173ed9018621d870cadbc28efff4848de7d: Status 404 returned error can't find the container with id 3eb1e5d37705bf4e5bca48e690856173ed9018621d870cadbc28efff4848de7d
	
	
	==> storage-provisioner [f868837295ddd1e21e5dc22d2dfce3942288db15f8275d572681b1bb5a2b1339] <==
	I1121 14:59:50.126302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:59:50.178691       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:59:50.179532       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:59:50.193657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:59:50.210310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:59:50.210645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:59:50.210890       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-844780_00c7ed8e-8ca1-4e6c-a4ef-20aa1295f49b!
	W1121 14:59:50.220922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:59:50.221791       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbf9e551-e0be-48eb-aa2e-8e3a20e98a71", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-844780_00c7ed8e-8ca1-4e6c-a4ef-20aa1295f49b became leader
	W1121 14:59:50.252261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:59:50.320892       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-844780_00c7ed8e-8ca1-4e6c-a4ef-20aa1295f49b!
	W1121 14:59:52.261969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:59:52.268738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:59:54.272362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:59:54.276914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:59:56.279550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:59:56.284745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:59:58.288316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:59:58.293398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:00.306254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:00.400815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:02.426378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:02.460666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:04.479347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:04.485777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
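The storage-provisioner warnings in the dump above come from its leader election, which still locks on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath). A minimal sketch of the Lease-based alternative the deprecation warning points at, using client-go's leaderelection package; the identity source and callbacks here are illustrative, not minikube's actual wiring:

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock on a coordination.k8s.io/v1 Lease instead of a v1 Endpoints
	// object, which is what the warning recommends for v1.33+.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // lease name taken from the log above
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the provisioner controller here
			},
			OnStoppedLeading: func() {
				// step down cleanly
			},
		},
	})
}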
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-844780 -n no-preload-844780
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-844780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.55s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.82s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (262.445574ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:00:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
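The root cause is visible in the stderr block above: before enabling an addon, minikube checks whether the cluster is paused by listing containers with runc, and `sudo runc list -f json` exits 1 because /run/runc is missing on this crio node. A rough standalone reproduction of that probe, with simplified error handling; this is a sketch of the check the error message describes, not minikube's actual code path:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the subset of `runc list -f json` output we care about.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // "paused", "running", ...
}

func listPaused() ([]string, error) {
	// Same command the log shows minikube running; on this node it fails
	// with "open /run/runc: no such file or directory" because the default
	// runc state directory is absent.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err) // mirrors the MK_ADDON_ENABLE_PAUSED exit
		return
	}
	fmt.Println("paused containers:", ids)
}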
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-902161 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-902161 describe deploy/metrics-server -n kube-system: exit status 1 (83.67029ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-902161 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
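The expected string is composed from the two flags passed to `addons enable`: the `--registries=MetricsServer=fake.domain` override is prefixed onto the `--images=MetricsServer=registry.k8s.io/echoserver:1.4` override for the same component. A sketch of that substitution only, not minikube's actual addon-image code:

package main

import "fmt"

// expectedAddonImage prefixes a registry override onto an image override,
// which is how the assertion string above is formed.
func expectedAddonImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	// Values from the test flags above.
	fmt.Println(expectedAddonImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// Output: fake.domain/registry.k8s.io/echoserver:1.4
}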
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-902161
helpers_test.go:243: (dbg) docker inspect embed-certs-902161:

-- stdout --
	[
	    {
	        "Id": "38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46",
	        "Created": "2025-11-21T14:58:43.65271767Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477581,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:58:43.715227769Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/hostname",
	        "HostsPath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/hosts",
	        "LogPath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46-json.log",
	        "Name": "/embed-certs-902161",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-902161:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-902161",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46",
	                "LowerDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-902161",
	                "Source": "/var/lib/docker/volumes/embed-certs-902161/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-902161",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-902161",
	                "name.minikube.sigs.k8s.io": "embed-certs-902161",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccd9e2f70258b403a1919be31e22dbe95bfaf70050c27427e697b006d3aa8346",
	            "SandboxKey": "/var/run/docker/netns/ccd9e2f70258",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-902161": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:a7:d9:eb:0b:b1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "353a1d7977a8c37987b78fe82de2605299d1e2de5a9662311c657d4b51a465bb",
	                    "EndpointID": "f0ab87fb6527648e42454dbf162f7600fbe4999e34acbc80e2e61072719ad044",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-902161",
	                        "38e73448071a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
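The post-mortem relies on only a few fields of this inspect dump: the container state (running, not paused) and the host port bound to the API server's 8443/tcp (127.0.0.1:33436 here). A small helper one might use to pull just those fields, assuming the Docker CLI is on PATH; the struct models only the fields shown above, not the full inspect schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models the fields of `docker inspect` output used above.
type inspectEntry struct {
	State struct {
		Status string `json:"Status"`
		Paused bool   `json:"Paused"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// docker inspect prints a JSON array, one entry per container.
	out, err := exec.Command("docker", "inspect", "embed-certs-902161").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println("status:", e.State.Status, "paused:", e.State.Paused)
		for _, b := range e.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver reachable at %s:%s\n", b.HostIP, b.HostPort)
		}
	}
}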
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-902161 -n embed-certs-902161
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-902161 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-902161 logs -n 25: (1.269388123s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-609503 sudo crio config                                                                                                                                                                                                             │ cilium-609503                │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │                     │
	│ delete  │ -p cilium-609503                                                                                                                                                                                                                              │ cilium-609503                │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │ 21 Nov 25 14:54 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:54 UTC │ 21 Nov 25 14:55 UTC │
	│ delete  │ -p force-systemd-env-360486                                                                                                                                                                                                                   │ force-systemd-env-360486     │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p cert-options-605096 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-605096          │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ ssh     │ cert-options-605096 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-605096          │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ ssh     │ -p cert-options-605096 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-605096          │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ delete  │ -p cert-options-605096                                                                                                                                                                                                                        │ cert-options-605096          │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-357479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │                     │
	│ stop    │ -p old-k8s-version-357479 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-357479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ image   │ old-k8s-version-357479 image list --format=json                                                                                                                                                                                               │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ pause   │ -p old-k8s-version-357479 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p cert-expiration-304879                                                                                                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 15:00 UTC │
	│ delete  │ -p disable-driver-mounts-984933                                                                                                                                                                                                               │ disable-driver-mounts-984933 │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ stop    │ -p no-preload-844780 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:58:38
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:58:38.107056  476289 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:58:38.107169  476289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:58:38.107180  476289 out.go:374] Setting ErrFile to fd 2...
	I1121 14:58:38.107185  476289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:58:38.107447  476289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:58:38.107915  476289 out.go:368] Setting JSON to false
	I1121 14:58:38.109074  476289 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9670,"bootTime":1763727448,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:58:38.109155  476289 start.go:143] virtualization:  
	I1121 14:58:38.116549  476289 out.go:179] * [no-preload-844780] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:58:38.125004  476289 notify.go:221] Checking for updates...
	I1121 14:58:38.128671  476289 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:58:38.131653  476289 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:58:38.134583  476289 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:58:38.137911  476289 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:58:38.145302  476289 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:58:38.148227  476289 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:58:38.151625  476289 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:38.151747  476289 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:58:38.189139  476289 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:58:38.189265  476289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:58:38.280228  476289 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-21 14:58:38.267278621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:58:38.280340  476289 docker.go:319] overlay module found
	I1121 14:58:38.285459  476289 out.go:179] * Using the docker driver based on user configuration
	I1121 14:58:38.296692  476289 start.go:309] selected driver: docker
	I1121 14:58:38.296719  476289 start.go:930] validating driver "docker" against <nil>
	I1121 14:58:38.296734  476289 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:58:38.297445  476289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:58:38.386653  476289 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2025-11-21 14:58:38.374286591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:58:38.386799  476289 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:58:38.387034  476289 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:58:38.392862  476289 out.go:179] * Using Docker driver with root privileges
	I1121 14:58:38.396026  476289 cni.go:84] Creating CNI manager for ""
	I1121 14:58:38.396096  476289 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:58:38.396109  476289 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
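(The "recommending kindnet" line reflects a small driver/runtime decision table: the docker driver with a non-docker runtime such as crio needs an explicit CNI, and kindnet is the default pick. A hypothetical condensation of that rule, not minikube's actual cni.New:

package main

import "fmt"

// chooseCNI condenses the decision visible in the log: with the docker
// driver and the crio runtime, kindnet is the recommended CNI.
// Hypothetical simplification for illustration only.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "" // the runtime's own networking suffices
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // prints "kindnet", as in the log
}
)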
	I1121 14:58:38.396215  476289 start.go:353] cluster config:
	{Name:no-preload-844780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-844780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:58:38.399524  476289 out.go:179] * Starting "no-preload-844780" primary control-plane node in "no-preload-844780" cluster
	I1121 14:58:38.402613  476289 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:58:38.405717  476289 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:58:38.408755  476289 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:38.408881  476289 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/config.json ...
	I1121 14:58:38.408917  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/config.json: {Name:mkfb7cdc2277aa8a4d6474a9a41ca8434a01413b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:38.409077  476289 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:58:38.409260  476289 cache.go:107] acquiring lock: {Name:mk6b29d3694958920b384334ab1c1ec7d74d89cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.409321  476289 cache.go:115] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1121 14:58:38.409333  476289 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 79.419µs
	I1121 14:58:38.409342  476289 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
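(The cache lines above follow a fixed pattern: take a per-image file lock, stat the flattened tar path under .minikube/cache/images/<arch>/, and on a hit record the duration and move on. A self-contained sketch of the path layout and the hit check; the root path here is invented:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// cachedImagePath flattens a ref like "gcr.io/k8s-minikube/storage-provisioner:v5"
// into the on-disk layout seen in the log: the tag separator ":" becomes "_".
func cachedImagePath(root, arch, ref string) string {
	i := strings.LastIndex(ref, ":")
	return filepath.Join(root, "cache", "images", arch, ref[:i]+"_"+ref[i+1:])
}

func main() {
	start := time.Now()
	p := cachedImagePath("/home/jenkins/.minikube", "arm64",
		"gcr.io/k8s-minikube/storage-provisioner:v5")
	if _, err := os.Stat(p); err == nil {
		fmt.Printf("cache hit: %s (checked in %s)\n", p, time.Since(start))
		return
	}
	fmt.Println("cache miss; would download and save to", p)
}
)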
	I1121 14:58:38.409357  476289 cache.go:107] acquiring lock: {Name:mkc6fd8b8c696cdeb14f732597ce94848629bc57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.409646  476289 cache.go:107] acquiring lock: {Name:mkfc4087bd4b5607802af9dd21b35ce6c4cbcaf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.409822  476289 cache.go:107] acquiring lock: {Name:mk19595a45a9cb004f48547824250752e64a8cd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.410102  476289 cache.go:107] acquiring lock: {Name:mka784778591c360ff95b3dffbd7ca6884371f11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.410399  476289 cache.go:107] acquiring lock: {Name:mk8bf5105492f807bfceabb40550e1d4001f342e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.410611  476289 cache.go:107] acquiring lock: {Name:mk31d53eefe0afe1c3fb10ca1e47af7b59cf7415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.410880  476289 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:38.411077  476289 cache.go:107] acquiring lock: {Name:mkf829b7c112aa5e2eabfd20e6118dc646dc5e50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.411356  476289 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:38.411654  476289 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:38.411889  476289 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:38.412040  476289 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1121 14:58:38.412213  476289 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:38.412269  476289 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:38.423006  476289 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:38.423459  476289 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:38.425424  476289 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:38.425814  476289 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1121 14:58:38.426226  476289 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:38.426522  476289 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:38.426664  476289 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
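(Each "daemon lookup ... No such image" line is an expected miss: minikube asks the local Docker daemon for the image first and only then falls back to pulling from the registry. A sketch of that probe using the docker CLI; the real code uses a Docker client library rather than shelling out:

package main

import (
	"fmt"
	"os/exec"
)

// inDaemon reports whether the local Docker daemon already has ref.
// "docker image inspect" exits non-zero with "No such image" on a miss,
// matching the log lines above.
func inDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	for _, ref := range []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/pause:3.10.1",
	} {
		if inDaemon(ref) {
			fmt.Println("daemon hit:", ref)
		} else {
			fmt.Println("daemon miss, pull from registry:", ref)
		}
	}
}
)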
	I1121 14:58:38.465281  476289 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:58:38.465341  476289 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:58:38.465386  476289 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:58:38.465457  476289 start.go:360] acquireMachinesLock for no-preload-844780: {Name:mke3cf8aa4a5f035751556a1a6fbea0be7cfa7e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:58:38.465767  476289 start.go:364] duration metric: took 168.97µs to acquireMachinesLock for "no-preload-844780"
	I1121 14:58:38.465843  476289 start.go:93] Provisioning new machine with config: &{Name:no-preload-844780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-844780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:58:38.466021  476289 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:58:38.007896  476123 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:58:38.008176  476123 start.go:159] libmachine.API.Create for "embed-certs-902161" (driver="docker")
	I1121 14:58:38.008206  476123 client.go:173] LocalClient.Create starting
	I1121 14:58:38.008298  476123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem
	I1121 14:58:38.008337  476123 main.go:143] libmachine: Decoding PEM data...
	I1121 14:58:38.008351  476123 main.go:143] libmachine: Parsing certificate...
	I1121 14:58:38.008594  476123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem
	I1121 14:58:38.008626  476123 main.go:143] libmachine: Decoding PEM data...
	I1121 14:58:38.008637  476123 main.go:143] libmachine: Parsing certificate...
	I1121 14:58:38.009095  476123 cli_runner.go:164] Run: docker network inspect embed-certs-902161 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:58:38.038623  476123 cli_runner.go:211] docker network inspect embed-certs-902161 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:58:38.038706  476123 network_create.go:284] running [docker network inspect embed-certs-902161] to gather additional debugging logs...
	I1121 14:58:38.038728  476123 cli_runner.go:164] Run: docker network inspect embed-certs-902161
	W1121 14:58:38.062111  476123 cli_runner.go:211] docker network inspect embed-certs-902161 returned with exit code 1
	I1121 14:58:38.062159  476123 network_create.go:287] error running [docker network inspect embed-certs-902161]: docker network inspect embed-certs-902161: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-902161 not found
	I1121 14:58:38.062180  476123 network_create.go:289] output of [docker network inspect embed-certs-902161]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-902161 not found
	
	** /stderr **
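(The inspect calls above embed a Go template so a single docker invocation returns ready-to-parse JSON, and a "not found" failure with exit status 1 is the expected signal to fall through to network creation. A simplified sketch of that probe-then-create decision:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "embed-certs-902161" // network name taken from the log
	format := `{"Name":"{{.Name}}","Subnet":"{{range .IPAM.Config}}{{.Subnet}}{{end}}"}`
	out, err := exec.Command("docker", "network", "inspect", name, "--format", format).CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "not found") {
			// the state seen in the log: fall through to "docker network create"
			fmt.Println("network missing; create it")
			return
		}
		panic(err) // some other docker failure
	}
	fmt.Println("network exists:", strings.TrimSpace(string(out)))
}
)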
	I1121 14:58:38.062281  476123 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:58:38.094490  476123 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-82d3b8bc8a36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:46:f3:82:e8:95} reservation:<nil>}
	I1121 14:58:38.094794  476123 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-741c868a6917 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:04:b7:a7:98:dc} reservation:<nil>}
	I1121 14:58:38.094973  476123 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-047a1ecabae6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:eb:03:dd:6a:cd} reservation:<nil>}
	I1121 14:58:38.095327  476123 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197cc00}
	I1121 14:58:38.095344  476123 network_create.go:124] attempt to create docker network embed-certs-902161 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:58:38.095407  476123 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-902161 embed-certs-902161
	I1121 14:58:38.167480  476123 network_create.go:108] docker network embed-certs-902161 192.168.76.0/24 created
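(The "skipping subnet ... taken" lines walk the 192.168.x.0/24 private range in steps of 9 (49, 58, 67, ...) until a /24 with no existing bridge is found; here the scan lands on 192.168.76.0/24. A simplified sketch of that scan, with the step size inferred from the log and the taken-check reduced to host interface addresses:

package main

import (
	"fmt"
	"net"
)

// taken reports whether some host interface already sits inside subnet,
// which is what a minikube bridge like br-82d3b8bc8a36 would look like.
func taken(subnet *net.IPNet) bool {
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && subnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	// 192.168.49.0/24, 192.168.58.0/24, ... as in the log
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, subnet, _ := net.ParseCIDR(cidr)
		if taken(subnet) {
			fmt.Println("skipping taken subnet", cidr)
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}
)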
	I1121 14:58:38.167515  476123 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-902161" container
	I1121 14:58:38.167590  476123 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:58:38.202298  476123 cli_runner.go:164] Run: docker volume create embed-certs-902161 --label name.minikube.sigs.k8s.io=embed-certs-902161 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:58:38.233876  476123 oci.go:103] Successfully created a docker volume embed-certs-902161
	I1121 14:58:38.233974  476123 cli_runner.go:164] Run: docker run --rm --name embed-certs-902161-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-902161 --entrypoint /usr/bin/test -v embed-certs-902161:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:58:38.969318  476123 oci.go:107] Successfully prepared a docker volume embed-certs-902161
	I1121 14:58:38.969395  476123 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:38.969409  476123 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:58:38.969486  476123 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-902161:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
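(The long docker run above is a throwaway container whose only job is to untar the preload tarball into the named volume, so the node container later starts with /var already populated. Reconstructed as a Go sketch; the tarball path is a placeholder:

package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4" // placeholder
	volume := "embed-certs-902161"
	base := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924"
	// run tar inside the base image with the tarball and volume both mounted
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		base,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
)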
	I1121 14:58:38.472452  476289 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:58:38.472760  476289 start.go:159] libmachine.API.Create for "no-preload-844780" (driver="docker")
	I1121 14:58:38.472822  476289 client.go:173] LocalClient.Create starting
	I1121 14:58:38.472914  476289 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem
	I1121 14:58:38.472984  476289 main.go:143] libmachine: Decoding PEM data...
	I1121 14:58:38.473018  476289 main.go:143] libmachine: Parsing certificate...
	I1121 14:58:38.473101  476289 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem
	I1121 14:58:38.473151  476289 main.go:143] libmachine: Decoding PEM data...
	I1121 14:58:38.473178  476289 main.go:143] libmachine: Parsing certificate...
	I1121 14:58:38.473566  476289 cli_runner.go:164] Run: docker network inspect no-preload-844780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:58:38.496587  476289 cli_runner.go:211] docker network inspect no-preload-844780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:58:38.496698  476289 network_create.go:284] running [docker network inspect no-preload-844780] to gather additional debugging logs...
	I1121 14:58:38.496718  476289 cli_runner.go:164] Run: docker network inspect no-preload-844780
	W1121 14:58:38.522456  476289 cli_runner.go:211] docker network inspect no-preload-844780 returned with exit code 1
	I1121 14:58:38.522533  476289 network_create.go:287] error running [docker network inspect no-preload-844780]: docker network inspect no-preload-844780: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-844780 not found
	I1121 14:58:38.522571  476289 network_create.go:289] output of [docker network inspect no-preload-844780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-844780 not found
	
	** /stderr **
	I1121 14:58:38.522736  476289 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:58:38.541240  476289 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-82d3b8bc8a36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:46:f3:82:e8:95} reservation:<nil>}
	I1121 14:58:38.541667  476289 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-741c868a6917 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:04:b7:a7:98:dc} reservation:<nil>}
	I1121 14:58:38.542019  476289 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-047a1ecabae6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:eb:03:dd:6a:cd} reservation:<nil>}
	I1121 14:58:38.542313  476289 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-353a1d7977a8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:d6:61:83:05:3c} reservation:<nil>}
	I1121 14:58:38.542687  476289 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c78160}
	I1121 14:58:38.542704  476289 network_create.go:124] attempt to create docker network no-preload-844780 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 14:58:38.542758  476289 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-844780 no-preload-844780
	I1121 14:58:38.693827  476289 network_create.go:108] docker network no-preload-844780 192.168.85.0/24 created
	I1121 14:58:38.693859  476289 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-844780" container
	I1121 14:58:38.693933  476289 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:58:38.720768  476289 cli_runner.go:164] Run: docker volume create no-preload-844780 --label name.minikube.sigs.k8s.io=no-preload-844780 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:58:38.740206  476289 oci.go:103] Successfully created a docker volume no-preload-844780
	I1121 14:58:38.740293  476289 cli_runner.go:164] Run: docker run --rm --name no-preload-844780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-844780 --entrypoint /usr/bin/test -v no-preload-844780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:58:38.816053  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1121 14:58:38.842130  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:58:38.853391  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:58:38.877818  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1121 14:58:38.877846  476289 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 467.451659ms
	I1121 14:58:38.877863  476289 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1121 14:58:38.883526  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:58:38.907475  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:58:38.922003  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:58:38.965773  476289 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:58:39.396813  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1121 14:58:39.396841  476289 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 986.743052ms
	I1121 14:58:39.396854  476289 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1121 14:58:39.532181  476289 oci.go:107] Successfully prepared a docker volume no-preload-844780
	I1121 14:58:39.532284  476289 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1121 14:58:39.532482  476289 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:58:39.532631  476289 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:58:39.695275  476289 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-844780 --name no-preload-844780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-844780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-844780 --network no-preload-844780 --ip 192.168.85.2 --volume no-preload-844780:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
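(This docker run is where the node "machine" actually appears: privileged, with the host's kernel modules mounted read-only, pinned to the static IP computed earlier, and with each API port published to an ephemeral 127.0.0.1 port. A condensed sketch of the flag set; several flags from the log are omitted for brevity:

package main

import (
	"os"
	"os/exec"
)

func main() {
	name, ip := "no-preload-844780", "192.168.85.2"
	base := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924"
	args := []string{"run", "-d", "-t",
		"--privileged", // kubelet and crio need full control of the container
		"--security-opt", "seccomp=unconfined",
		"--security-opt", "apparmor=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run", // systemd inside expects tmpfs here
		"-v", "/lib/modules:/lib/modules:ro", // host kernel modules, read-only
		"--hostname", name, "--name", name,
		"--network", name, "--ip", ip, // the static .2 address calculated earlier
		"--volume", name + ":/var", // the preloaded volume
		"--memory=3072mb", "--cpus=2",
		"--publish=127.0.0.1::8443", // apiserver on an ephemeral host port
		"--publish=127.0.0.1::22",   // ssh, used by the provisioning steps below
		base}
	cmd := exec.Command("docker", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
)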
	I1121 14:58:40.021539  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1121 14:58:40.021688  476289 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.611870192s
	I1121 14:58:40.021707  476289 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1121 14:58:40.141233  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1121 14:58:40.141269  476289 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.730193481s
	I1121 14:58:40.141286  476289 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1121 14:58:40.151813  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1121 14:58:40.151851  476289 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.742211011s
	I1121 14:58:40.151864  476289 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1121 14:58:40.231599  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1121 14:58:40.231821  476289 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.82245761s
	I1121 14:58:40.231846  476289 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1121 14:58:40.637324  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Running}}
	I1121 14:58:40.683250  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:58:40.683662  476289 cache.go:157] /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1121 14:58:40.683685  476289 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.273083966s
	I1121 14:58:40.683697  476289 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1121 14:58:40.683711  476289 cache.go:87] Successfully saved all images to host disk.
	I1121 14:58:40.712504  476289 cli_runner.go:164] Run: docker exec no-preload-844780 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:58:40.791466  476289 oci.go:144] the created container "no-preload-844780" has a running status.
	I1121 14:58:40.791498  476289 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa...
	I1121 14:58:41.119524  476289 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:58:41.146744  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:58:41.173290  476289 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:58:41.173314  476289 kic_runner.go:114] Args: [docker exec --privileged no-preload-844780 chown docker:docker /home/docker/.ssh/authorized_keys]
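(The kic ssh key step generates a keypair on the host and installs the public half as /home/docker/.ssh/authorized_keys inside the container; the 381 bytes in the log are consistent with an RSA public key in authorized_keys format. A sketch using golang.org/x/crypto/ssh; the 2048-bit key size is an assumption:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// generate the keypair that later lands in authorized_keys
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
	fmt.Println("wrote id_rsa / id_rsa.pub")
}
)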
	I1121 14:58:41.282434  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:58:41.315576  476289 machine.go:94] provisionDockerMachine start ...
	I1121 14:58:41.315675  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:41.342221  476289 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:41.342559  476289 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1121 14:58:41.342599  476289 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:58:41.343368  476289 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48884->127.0.0.1:33428: read: connection reset by peer
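(The "Error dialing TCP ... connection reset by peer" line is benign: sshd inside the just-created container is not up yet, so the first handshake fails and the provisioner retries until "hostname" succeeds, as it does a few seconds later in the log. A sketch of that dial-with-retry, with port 33428 taken from the log and host key checking disabled as minikube does for local containers:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
		Timeout:         10 * time.Second,
	}
	// the first dial can fail with "connection reset by peer" while sshd
	// is still starting, exactly as in the log; retry with a short pause
	var client *ssh.Client
	for i := 0; i < 10; i++ {
		if client, err = ssh.Dial("tcp", "127.0.0.1:33428", cfg); err == nil {
			break
		}
		time.Sleep(time.Second)
	}
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname => %s", out)
}
)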
	I1121 14:58:43.544090  476123 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-902161:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.574567915s)
	I1121 14:58:43.544123  476123 kic.go:203] duration metric: took 4.574710439s to extract preloaded images to volume ...
	W1121 14:58:43.544255  476123 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:58:43.544365  476123 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:58:43.636688  476123 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-902161 --name embed-certs-902161 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-902161 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-902161 --network embed-certs-902161 --ip 192.168.76.2 --volume embed-certs-902161:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:58:43.945616  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Running}}
	I1121 14:58:43.966414  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:58:43.989259  476123 cli_runner.go:164] Run: docker exec embed-certs-902161 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:58:44.046109  476123 oci.go:144] the created container "embed-certs-902161" has a running status.
	I1121 14:58:44.046140  476123 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa...
	I1121 14:58:45.678027  476123 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:58:45.702131  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:58:45.726060  476123 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:58:45.726086  476123 kic_runner.go:114] Args: [docker exec --privileged embed-certs-902161 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:58:45.766813  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:58:45.788348  476123 machine.go:94] provisionDockerMachine start ...
	I1121 14:58:45.788537  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:45.810143  476123 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:45.810474  476123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1121 14:58:45.810485  476123 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:58:45.975940  476123 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-902161
	
	I1121 14:58:45.975962  476123 ubuntu.go:182] provisioning hostname "embed-certs-902161"
	I1121 14:58:45.976025  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:45.993906  476123 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:45.994211  476123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1121 14:58:45.994227  476123 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-902161 && echo "embed-certs-902161" | sudo tee /etc/hostname
	I1121 14:58:46.159333  476123 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-902161
	
	I1121 14:58:46.159414  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:46.178055  476123 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:46.178359  476123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1121 14:58:46.178383  476123 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-902161' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-902161/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-902161' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:58:46.324700  476123 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:58:46.324723  476123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 14:58:46.324746  476123 ubuntu.go:190] setting up certificates
	I1121 14:58:46.324757  476123 provision.go:84] configureAuth start
	I1121 14:58:46.324827  476123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 14:58:46.344155  476123 provision.go:143] copyHostCerts
	I1121 14:58:46.344228  476123 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 14:58:46.344246  476123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 14:58:46.344310  476123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 14:58:46.344504  476123 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 14:58:46.344514  476123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 14:58:46.344538  476123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 14:58:46.344600  476123 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 14:58:46.344605  476123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 14:58:46.344625  476123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 14:58:46.344673  476123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.embed-certs-902161 san=[127.0.0.1 192.168.76.2 embed-certs-902161 localhost minikube]
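(configureAuth refreshes the host-side CA material and then mints a server certificate whose SANs are exactly the san=[...] list logged above: the loopback address, the container's static IP, the profile name, localhost and minikube. A self-contained sketch with crypto/x509; unlike minikube, it self-signs instead of signing with the profile's ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-902161"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the san=[...] list from the log, split into IP and DNS entries
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"embed-certs-902161", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", pemBytes, 0644); err != nil {
		panic(err)
	}
}
)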
	I1121 14:58:46.713459  476123 provision.go:177] copyRemoteCerts
	I1121 14:58:46.713580  476123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:58:46.713689  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:46.731016  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:58:46.836701  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:58:46.861849  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:58:46.887024  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:58:46.910909  476123 provision.go:87] duration metric: took 586.128191ms to configureAuth
	I1121 14:58:46.910938  476123 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:58:46.911110  476123 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:46.911223  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:46.931978  476123 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:46.932454  476123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1121 14:58:46.932507  476123 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:58:47.284317  476123 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:58:47.284338  476123 machine.go:97] duration metric: took 1.495972045s to provisionDockerMachine
	I1121 14:58:47.284364  476123 client.go:176] duration metric: took 9.276136491s to LocalClient.Create
	I1121 14:58:47.284378  476123 start.go:167] duration metric: took 9.276204742s to libmachine.API.Create "embed-certs-902161"
	I1121 14:58:47.284417  476123 start.go:293] postStartSetup for "embed-certs-902161" (driver="docker")
	I1121 14:58:47.284427  476123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:58:47.284485  476123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:58:47.284522  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:47.307837  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:58:47.430114  476123 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:58:47.436681  476123 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:58:47.436759  476123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:58:47.436783  476123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 14:58:47.436875  476123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 14:58:47.436996  476123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 14:58:47.437139  476123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:58:47.449692  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
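(The filesync step mirrors everything under .minikube/files into the node at the same relative path, which is how the test's 2910602.pem ends up in /etc/ssl/certs. A sketch of the scan; the root path here is invented:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/home/jenkins/.minikube/files" // local assets dir, as scanned in the log
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		// a file at <root>/etc/ssl/certs/x.pem is copied to /etc/ssl/certs/x.pem
		target := strings.TrimPrefix(p, root)
		fmt.Printf("local asset: %s -> %s\n", p, target)
		return nil
	})
}
)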
	I1121 14:58:47.480133  476123 start.go:296] duration metric: took 195.701543ms for postStartSetup
	I1121 14:58:47.480603  476123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 14:58:47.497174  476123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/config.json ...
	I1121 14:58:47.497466  476123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:58:47.497512  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:47.518276  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:58:47.621549  476123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:58:47.626665  476123 start.go:128] duration metric: took 9.622427094s to createHost
	I1121 14:58:47.626686  476123 start.go:83] releasing machines lock for "embed-certs-902161", held for 9.622551707s
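(createHost finishes with two disk checks over SSH: df -h /var | awk 'NR==2{print $5}' for percent used, and df -BG /var | awk 'NR==2{print $4}' for free gigabytes. The same column extraction without awk, run locally for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dfField runs df with the given flag and returns column col of the data
// row, mirroring the "df ... | awk 'NR==2{print $N}'" pipelines in the log.
func dfField(flag, path string, col int) (string, error) {
	out, err := exec.Command("df", flag, path).Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	fields := strings.Fields(lines[1])
	if col > len(fields) {
		return "", fmt.Errorf("no column %d in %q", col, lines[1])
	}
	return fields[col-1], nil
}

func main() {
	used, _ := dfField("-h", "/var", 5)  // percent used
	free, _ := dfField("-BG", "/var", 4) // gigabytes available
	fmt.Printf("/var: %s used, %s free\n", used, free)
}
)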
	I1121 14:58:47.626753  476123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 14:58:47.646364  476123 ssh_runner.go:195] Run: cat /version.json
	I1121 14:58:47.646415  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:47.646631  476123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:58:47.646700  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:58:47.673804  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:58:47.684495  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:58:44.580029  476289 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-844780
	
	I1121 14:58:44.580081  476289 ubuntu.go:182] provisioning hostname "no-preload-844780"
	I1121 14:58:44.580176  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:44.623720  476289 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:44.624024  476289 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1121 14:58:44.624035  476289 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-844780 && echo "no-preload-844780" | sudo tee /etc/hostname
	I1121 14:58:44.873351  476289 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-844780
	
	I1121 14:58:44.873708  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:44.940638  476289 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:44.940940  476289 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1121 14:58:44.940956  476289 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-844780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-844780/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-844780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:58:45.127624  476289 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:58:45.127675  476289 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 14:58:45.127720  476289 ubuntu.go:190] setting up certificates
	I1121 14:58:45.127734  476289 provision.go:84] configureAuth start
	I1121 14:58:45.127809  476289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-844780
	I1121 14:58:45.161683  476289 provision.go:143] copyHostCerts
	I1121 14:58:45.161759  476289 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 14:58:45.161769  476289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 14:58:45.161870  476289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 14:58:45.161983  476289 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 14:58:45.161990  476289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 14:58:45.162018  476289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 14:58:45.162076  476289 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 14:58:45.162082  476289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 14:58:45.162106  476289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 14:58:45.162187  476289 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.no-preload-844780 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-844780]
	I1121 14:58:45.642820  476289 provision.go:177] copyRemoteCerts
	I1121 14:58:45.642902  476289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:58:45.642955  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:45.667329  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:58:45.775943  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:58:45.795913  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:58:45.818416  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:58:45.839708  476289 provision.go:87] duration metric: took 711.946003ms to configureAuth
	I1121 14:58:45.839736  476289 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:58:45.839939  476289 config.go:182] Loaded profile config "no-preload-844780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:58:45.840056  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:45.863903  476289 main.go:143] libmachine: Using SSH client type: native
	I1121 14:58:45.864217  476289 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1121 14:58:45.864233  476289 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:58:46.258089  476289 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:58:46.258128  476289 machine.go:97] duration metric: took 4.942523595s to provisionDockerMachine
	I1121 14:58:46.258140  476289 client.go:176] duration metric: took 7.785298917s to LocalClient.Create
	I1121 14:58:46.258163  476289 start.go:167] duration metric: took 7.785404453s to libmachine.API.Create "no-preload-844780"
	I1121 14:58:46.258171  476289 start.go:293] postStartSetup for "no-preload-844780" (driver="docker")
	I1121 14:58:46.258187  476289 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:58:46.258278  476289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:58:46.258318  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:46.278676  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:58:46.383109  476289 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:58:46.387454  476289 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:58:46.387479  476289 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:58:46.387490  476289 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 14:58:46.387557  476289 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 14:58:46.387639  476289 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 14:58:46.387745  476289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:58:46.400866  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:58:46.426748  476289 start.go:296] duration metric: took 168.553116ms for postStartSetup
	I1121 14:58:46.427127  476289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-844780
	I1121 14:58:46.448113  476289 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/config.json ...
	I1121 14:58:46.448574  476289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:58:46.448621  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:46.472967  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:58:46.571133  476289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:58:46.577531  476289 start.go:128] duration metric: took 8.111446806s to createHost
	I1121 14:58:46.577558  476289 start.go:83] releasing machines lock for "no-preload-844780", held for 8.111748668s
	I1121 14:58:46.577707  476289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-844780
	I1121 14:58:46.600378  476289 ssh_runner.go:195] Run: cat /version.json
	I1121 14:58:46.600430  476289 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:58:46.600477  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:46.600498  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:58:46.627408  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:58:46.644618  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
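The repeated cli_runner invocations above all answer one question: which host port Docker mapped to the container's sshd. A short sketch of the same lookup with os/exec and the Go template from the log (illustrative only; the log wraps the template in extra quotes):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Pulls the host port Docker mapped to the container's 22/tcp.
	const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "no-preload-844780").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out))) // 33428 in this run
}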
	I1121 14:58:46.846602  476289 ssh_runner.go:195] Run: systemctl --version
	I1121 14:58:46.853762  476289 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:58:46.899244  476289 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:58:46.904492  476289 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:58:46.904566  476289 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:58:46.942413  476289 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 14:58:46.942433  476289 start.go:496] detecting cgroup driver to use...
	I1121 14:58:46.942468  476289 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:58:46.942531  476289 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:58:46.963041  476289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:58:46.980548  476289 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:58:46.980609  476289 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:58:47.005075  476289 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:58:47.034603  476289 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:58:47.184868  476289 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:58:47.339560  476289 docker.go:234] disabling docker service ...
	I1121 14:58:47.339627  476289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:58:47.362647  476289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:58:47.379729  476289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:58:47.549236  476289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:58:47.692937  476289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:58:47.721182  476289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:58:47.741879  476289 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:58:47.741948  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.751089  476289 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 14:58:47.751159  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.760669  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.769543  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.779636  476289 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:58:47.790283  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.800561  476289 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.820545  476289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:47.835349  476289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:58:47.844280  476289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:58:47.853260  476289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:48.013867  476289 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 14:58:48.225230  476289 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:58:48.225298  476289 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:58:48.233079  476289 start.go:564] Will wait 60s for crictl version
	I1121 14:58:48.233193  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:48.237929  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:58:48.268536  476289 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:58:48.268618  476289 ssh_runner.go:195] Run: crio --version
	I1121 14:58:48.318347  476289 ssh_runner.go:195] Run: crio --version
	I1121 14:58:48.371421  476289 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
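The configuration pass above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, re-add conmon_cgroup = "pod", and open unprivileged ports via default_sysctls. A rough Go equivalent of those edits in one place (a sketch that assumes direct write access rather than sudo, and appends default_sysctls at the end of the file if it is missing):

package main

import (
	"log"
	"os"
	"regexp"
	"strings"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(b)

	// Pin the pause image and the cgroup manager (first two sed calls).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any old conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = strings.Replace(conf, `cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)

	// Reset and re-insert the unprivileged-port sysctl inside default_sysctls.
	conf = regexp.MustCompile(`(?m)^ *"net\.ipv4\.ip_unprivileged_port_start=.*"\n`).
		ReplaceAllString(conf, "")
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n]\n"
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "${0}\n  \"net.ipv4.ip_unprivileged_port_start=0\",")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}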
	I1121 14:58:47.880127  476123 ssh_runner.go:195] Run: systemctl --version
	I1121 14:58:47.887817  476123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:58:47.949809  476123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:58:47.955129  476123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:58:47.955204  476123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:58:47.993073  476123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 14:58:47.993099  476123 start.go:496] detecting cgroup driver to use...
	I1121 14:58:47.993132  476123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:58:47.993185  476123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:58:48.020241  476123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:58:48.038520  476123 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:58:48.038668  476123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:58:48.062828  476123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:58:48.084079  476123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:58:48.248471  476123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:58:48.411485  476123 docker.go:234] disabling docker service ...
	I1121 14:58:48.411551  476123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:58:48.437895  476123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:58:48.454575  476123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:58:48.582268  476123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:58:48.712241  476123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:58:48.725805  476123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:58:48.741181  476123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:58:48.741271  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.756986  476123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 14:58:48.757063  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.774560  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.789680  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.798690  476123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:58:48.809184  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.823064  476123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.846000  476123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:58:48.858546  476123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:58:48.870278  476123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:58:48.882343  476123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:49.073044  476123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 14:58:49.308283  476123 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:58:49.308363  476123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:58:49.313116  476123 start.go:564] Will wait 60s for crictl version
	I1121 14:58:49.313182  476123 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.324979  476123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:58:49.381216  476123 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:58:49.381320  476123 ssh_runner.go:195] Run: crio --version
	I1121 14:58:49.423912  476123 ssh_runner.go:195] Run: crio --version
	I1121 14:58:49.476504  476123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:58:49.479423  476123 cli_runner.go:164] Run: docker network inspect embed-certs-902161 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:58:49.501765  476123 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 14:58:49.506731  476123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
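The /etc/hosts update above is an idempotent rewrite: strip any stale host.minikube.internal line, then append the current gateway IP. The same pattern in Go (a sketch; the log routes the write through a temp file plus sudo cp, which is omitted here):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	// Gateway IP and hostname from the log line above.
	const entry = "192.168.76.1\thost.minikube.internal"

	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	// Keep everything except a stale host.minikube.internal mapping...
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	// ...then append the current one, so repeated runs converge to a single entry.
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}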
	I1121 14:58:49.523227  476123 kubeadm.go:884] updating cluster {Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:58:49.523340  476123 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:49.523393  476123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:58:49.574669  476123 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:58:49.574689  476123 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:58:49.574744  476123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:58:49.616766  476123 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:58:49.616787  476123 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:58:49.616795  476123 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1121 14:58:49.616876  476123 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-902161 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:58:49.616953  476123 ssh_runner.go:195] Run: crio config
	I1121 14:58:49.698694  476123 cni.go:84] Creating CNI manager for ""
	I1121 14:58:49.698842  476123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:58:49.698876  476123 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:58:49.698943  476123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-902161 NodeName:embed-certs-902161 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:58:49.699105  476123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-902161"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:58:49.699222  476123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:58:49.710761  476123 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:58:49.710834  476123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:58:49.721171  476123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1121 14:58:49.742282  476123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:58:49.767374  476123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
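The three "scp memory -->" steps transfer files that were rendered in memory, such as the kubelet unit printed at kubeadm.go:947 above. A sketch of how such a unit can be rendered with text/template (the struct and field names here are illustrative, not minikube's types; the values are the ones visible in this log):

package main

import (
	"log"
	"os"
	"text/template"
)

type kubeletOpts struct {
	BinDir, NodeName, NodeIP string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Render to stdout; minikube would instead scp the rendered bytes to the node.
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.34.1",
		NodeName: "embed-certs-902161",
		NodeIP:   "192.168.76.2",
	})
	if err != nil {
		log.Fatal(err)
	}
}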
	I1121 14:58:49.791815  476123 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:58:49.798158  476123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:58:49.815650  476123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:58:50.050744  476123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:58:50.089099  476123 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161 for IP: 192.168.76.2
	I1121 14:58:50.089120  476123 certs.go:195] generating shared ca certs ...
	I1121 14:58:50.089137  476123 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:50.089290  476123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 14:58:50.089333  476123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 14:58:50.089349  476123 certs.go:257] generating profile certs ...
	I1121 14:58:50.089410  476123 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.key
	I1121 14:58:50.089421  476123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.crt with IP's: []
	I1121 14:58:50.703976  476123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.crt ...
	I1121 14:58:50.704006  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.crt: {Name:mkec47ccc9c9ed88a1dce4f3a33a8315759141f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:50.704169  476123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.key ...
	I1121 14:58:50.704183  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.key: {Name:mk5eac5c4edceca70c60b5ca0e05d68ada8c79b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:50.704264  476123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key.5d5840b9
	I1121 14:58:50.704281  476123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt.5d5840b9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1121 14:58:51.073201  476123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt.5d5840b9 ...
	I1121 14:58:51.073276  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt.5d5840b9: {Name:mk7b591abb181c69a197ce4593beda8951c37712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:51.073486  476123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key.5d5840b9 ...
	I1121 14:58:51.073522  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key.5d5840b9: {Name:mk64cb9c0bfc236340e6def13d1152f902db06d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:51.073651  476123 certs.go:382] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt.5d5840b9 -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt
	I1121 14:58:51.073778  476123 certs.go:386] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key.5d5840b9 -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key
	I1121 14:58:51.073866  476123 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key
	I1121 14:58:51.073916  476123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.crt with IP's: []
	I1121 14:58:51.307432  476123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.crt ...
	I1121 14:58:51.308255  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.crt: {Name:mk46d078b26aae6798e6e49bc7315f6b0421a7bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:58:51.308521  476123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key ...
	I1121 14:58:51.308564  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key: {Name:mkd909ebe6563121d3e64a35c4b80b17befbc483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
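The certs.go/crypto.go sequence above generates per-profile certificates signed by the shared minikube CA, with the IP SANs listed at crypto.go:68. A condensed sketch of that signing step using Go's crypto/x509 (it assumes an RSA, PKCS#1-encoded CA key and a fixed three-year lifetime; this is not minikube's actual code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the shared CA that certs.go reports as already valid.
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyPEM, err := os.ReadFile("ca.key")
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(keyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the leaf, plus the IP SANs from the log line above.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour), // tolerate small clock skew
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.Create("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}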
	I1121 14:58:51.308837  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 14:58:51.308907  476123 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 14:58:51.308933  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:58:51.309010  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:58:51.309056  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:58:51.309111  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 14:58:51.309185  476123 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:58:51.309803  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:58:51.330192  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:58:51.349866  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:58:51.372500  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:58:51.393347  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1121 14:58:51.413256  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:58:51.432548  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:58:51.451901  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:58:51.471693  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:58:51.491516  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 14:58:51.511554  476123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 14:58:51.531716  476123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:58:51.546186  476123 ssh_runner.go:195] Run: openssl version
	I1121 14:58:51.552865  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:58:51.562561  476123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:51.566919  476123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:51.566998  476123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:58:51.610054  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:58:51.618685  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 14:58:51.626835  476123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 14:58:51.631098  476123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 14:58:51.631168  476123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 14:58:51.676351  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 14:58:51.684703  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 14:58:51.692869  476123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 14:58:51.697276  476123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 14:58:51.697341  476123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 14:58:51.763882  476123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
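Each installed CA above also gets a <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted certificates in /etc/ssl/certs. A sketch of producing one such link, shelling out to openssl the way the log does (the paths match this log; it must run with enough privilege to write /etc/ssl/certs):

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash used for the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0" // e.g. b5213941.0
	_ = os.Remove(link) // replace any stale link
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
}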
	I1121 14:58:51.786221  476123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:58:51.790934  476123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:58:51.790998  476123 kubeadm.go:401] StartCluster: {Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:58:51.791072  476123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:58:51.791133  476123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:58:51.849130  476123 cri.go:89] found id: ""
	I1121 14:58:51.849221  476123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:58:51.861018  476123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:58:51.869358  476123 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:58:51.869433  476123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:58:51.882122  476123 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:58:51.882154  476123 kubeadm.go:158] found existing configuration files:
	
	I1121 14:58:51.882213  476123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:58:51.894889  476123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:58:51.894986  476123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:58:51.902528  476123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:58:51.915068  476123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:58:51.915146  476123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:58:51.926210  476123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:58:51.939155  476123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:58:51.939257  476123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:58:51.950129  476123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:58:51.963221  476123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:58:51.963294  476123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:58:51.974199  476123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:58:52.031991  476123 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:58:52.033251  476123 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:58:52.068052  476123 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:58:52.068147  476123 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:58:52.068198  476123 kubeadm.go:319] OS: Linux
	I1121 14:58:52.068261  476123 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:58:52.068325  476123 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:58:52.068404  476123 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:58:52.068460  476123 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:58:52.068527  476123 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:58:52.068591  476123 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:58:52.068669  476123 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:58:52.068736  476123 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:58:52.068800  476123 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:58:52.154694  476123 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:58:52.154838  476123 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:58:52.154942  476123 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:58:52.168773  476123 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:58:52.174748  476123 out.go:252]   - Generating certificates and keys ...
	I1121 14:58:52.174853  476123 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:58:52.174934  476123 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:58:52.576193  476123 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:58:48.374731  476289 cli_runner.go:164] Run: docker network inspect no-preload-844780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:58:48.389954  476289 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:58:48.394227  476289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:58:48.404976  476289 kubeadm.go:884] updating cluster {Name:no-preload-844780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-844780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:58:48.405086  476289 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:58:48.405139  476289 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:58:48.443442  476289 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1121 14:58:48.443465  476289 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1121 14:58:48.443500  476289 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:48.443698  476289 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:48.443782  476289 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:48.443855  476289 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:48.443927  476289 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:48.444000  476289 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1121 14:58:48.444077  476289 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:48.444155  476289 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:48.447143  476289 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:48.447502  476289 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:48.447672  476289 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1121 14:58:48.447805  476289 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:48.447922  476289 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:48.448037  476289 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:48.448152  476289 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:48.448341  476289 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:48.743262  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:48.743891  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:48.748979  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:48.768190  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:48.768745  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:48.783973  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1121 14:58:48.842142  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:48.853558  476289 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1121 14:58:48.853679  476289 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:48.853756  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:48.903815  476289 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1121 14:58:48.903883  476289 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:48.903946  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:48.926862  476289 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1121 14:58:48.926966  476289 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:48.927045  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.013557  476289 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1121 14:58:49.013614  476289 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:49.013669  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.013731  476289 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1121 14:58:49.013764  476289 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:49.013794  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.031073  476289 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1121 14:58:49.031161  476289 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:49.031242  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.031357  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:49.031450  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:49.031563  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:49.031665  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:49.031774  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:49.031810  476289 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1121 14:58:49.031888  476289 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:58:49.031940  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:49.146152  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:49.146358  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:49.146255  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:49.146320  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:49.146502  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:49.146563  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:49.146567  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:58:49.313958  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:58:49.314040  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:58:49.314091  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:58:49.314158  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:49.314214  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:58:49.314281  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:58:49.314341  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:58:49.411445  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:58:49.411551  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:58:49.411651  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:58:49.470687  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:58:49.470788  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:58:49.470841  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:58:49.470891  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:58:49.470946  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:58:49.470994  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:58:49.471050  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:58:49.471096  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:58:49.471143  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:58:49.516247  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1121 14:58:49.516342  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:58:49.516436  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:58:49.516450  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1121 14:58:49.565716  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:58:49.565749  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1121 14:58:49.565807  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:58:49.565817  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1121 14:58:49.565853  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:58:49.565864  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1121 14:58:49.565914  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:58:49.565996  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:58:49.566033  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:58:49.566043  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1121 14:58:49.566080  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:58:49.566089  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1121 14:58:49.644760  476289 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1121 14:58:49.644923  476289 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:49.657697  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:58:49.657734  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1121 14:58:49.700420  476289 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:58:49.700480  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1121 14:58:49.971387  476289 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1121 14:58:49.971483  476289 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:49.971568  476289 ssh_runner.go:195] Run: which crictl
	I1121 14:58:50.376807  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:50.376865  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:58:50.486483  476289 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:58:50.486553  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:58:50.580724  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:53.605468  476123 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:58:54.177502  476123 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:58:55.084671  476123 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:58:56.424188  476123 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:58:56.424631  476123 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-902161 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:58:56.602968  476123 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:58:56.603560  476123 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-902161 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:58:57.594687  476123 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:58:53.313063  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (2.826485802s)
	I1121 14:58:53.313090  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:58:53.313107  476289 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:58:53.313157  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:58:53.313224  476289 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.73247539s)
	I1121 14:58:53.313261  476289 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:58:54.473693  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.160496275s)
	I1121 14:58:54.473727  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:58:54.473753  476289 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:58:54.473806  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:58:54.473888  476289 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.16061544s)
	I1121 14:58:54.473917  476289 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:58:54.473993  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:58:56.920874  476289 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.446857171s)
	I1121 14:58:56.920907  476289 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:58:56.920931  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1121 14:58:56.921052  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.447229565s)
	I1121 14:58:56.921067  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:58:56.921084  476289 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:58:56.921129  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:58:58.400627  476123 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:58:58.545520  476123 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:58:58.546029  476123 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:58:58.990478  476123 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:58:59.464735  476123 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:59:00.080480  476123 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:59:00.588324  476123 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:59:00.828782  476123 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:59:00.828881  476123 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:59:00.830380  476123 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:59:00.834147  476123 out.go:252]   - Booting up control plane ...
	I1121 14:59:00.834249  476123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:59:00.834327  476123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:59:00.835328  476123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:59:00.855008  476123 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:59:00.855279  476123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:59:00.864082  476123 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:59:00.864685  476123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:59:00.864860  476123 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:59:01.040815  476123 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:59:01.040944  476123 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:58:58.965580  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.044425306s)
	I1121 14:58:58.965615  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:58:58.965634  476289 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:58:58.965681  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:59:00.772630  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.806920899s)
	I1121 14:59:00.772658  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:59:00.772677  476289 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:59:00.772727  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:59:03.039382  476123 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001685187s
	I1121 14:59:03.046063  476123 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:59:03.046177  476123 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1121 14:59:03.046277  476123 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:59:03.046426  476123 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:59:06.151434  476289 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.378681469s)
	I1121 14:59:06.151465  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:59:06.151482  476289 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:59:06.151530  476289 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:59:07.104611  476289 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-289204/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1121 14:59:07.104649  476289 cache_images.go:125] Successfully loaded all cached images
	I1121 14:59:07.104656  476289 cache_images.go:94] duration metric: took 18.661179097s to LoadCachedImages
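The sequence above is minikube's cache-load pattern for a CRI-O node: each image tarball is checked with stat, copied over with scp only when missing, and then loaded with podman load. CRI-O and podman share the same containers/storage backend, which is why a podman-loaded image is immediately visible to crictl. A minimal sketch of one iteration, assuming pause_3.10.1 as the example image and "node:" as a placeholder for the minikube container's SSH endpoint:

	IMG=pause_3.10.1
	DST=/var/lib/minikube/images/$IMG
	# transfer the tarball only if the node does not already have it
	if ! stat -c "%s %y" "$DST" >/dev/null 2>&1; then
	    scp "$HOME/.minikube/cache/images/arm64/registry.k8s.io/$IMG" "node:$DST"
	fi
	sudo podman load -i "$DST"          # lands in containers/storage, shared with CRI-O
	sudo crictl images | grep pause     # confirm the runtime now sees it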
	I1121 14:59:07.104667  476289 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:59:07.104767  476289 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-844780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-844780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
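The empty ExecStart= followed by a populated ExecStart= in the unit dump above is the standard systemd override idiom: in a drop-in, an empty assignment clears the ExecStart inherited from the base kubelet.service before the new command line takes effect (minikube writes this drop-in to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down, at 14:59:09.125). The rendered unit can be inspected on the node with:

	sudo systemctl cat kubelet       # base unit plus every drop-in, as systemd resolves them
	sudo systemctl daemon-reload     # required after edits, before a restart picks them up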
	I1121 14:59:07.104858  476289 ssh_runner.go:195] Run: crio config
	I1121 14:59:07.232415  476289 cni.go:84] Creating CNI manager for ""
	I1121 14:59:07.232439  476289 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:59:07.232456  476289 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:59:07.232488  476289 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-844780 NodeName:no-preload-844780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:59:07.232679  476289 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-844780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
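The generated config above stacks four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A config of this shape can be sanity-checked without modifying the node; kubeadm's --dry-run prints the manifests and files it would write instead of applying them:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run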
	I1121 14:59:07.232787  476289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:59:07.246212  476289 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1121 14:59:07.246327  476289 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1121 14:59:07.258833  476289 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1121 14:59:07.258908  476289 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1121 14:59:07.259120  476289 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1121 14:59:07.259250  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1121 14:59:07.264475  476289 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1121 14:59:07.264510  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1121 14:59:08.208719  476289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:59:08.249807  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1121 14:59:08.260918  476289 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1121 14:59:08.261000  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1121 14:59:08.302837  476289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1121 14:59:08.326209  476289 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1121 14:59:08.326299  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
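Each download above pairs the binary with its .sha256 file; the checksum=file:... query string is minikube's own convention for telling its download helper where the digest lives. The equivalent manual fetch-and-verify against the upstream dl.k8s.io layout:

	curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet"
	curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256"
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # expects: kubelet: OK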
	I1121 14:59:09.106512  476289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:59:09.125361  476289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 14:59:09.143760  476289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:59:09.172792  476289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1121 14:59:09.202075  476289 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:59:09.209035  476289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
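The one-liner above updates /etc/hosts idempotently: any prior control-plane.minikube.internal entry is filtered out, the current IP is appended, and the file is replaced in a single cp. The same commands, untangled:

	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > "/tmp/h.$$"    # drop any stale entry
	printf '192.168.85.2\tcontrol-plane.minikube.internal\n' >> "/tmp/h.$$"   # append the current mapping
	sudo cp "/tmp/h.$$" /etc/hosts                                            # replace wholesale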
	I1121 14:59:09.218830  476289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:59:09.433889  476289 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:59:09.476900  476289 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780 for IP: 192.168.85.2
	I1121 14:59:09.476979  476289 certs.go:195] generating shared ca certs ...
	I1121 14:59:09.477009  476289 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:09.477255  476289 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 14:59:09.477336  476289 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 14:59:09.477379  476289 certs.go:257] generating profile certs ...
	I1121 14:59:09.477469  476289 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.key
	I1121 14:59:09.477501  476289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt with IP's: []
	I1121 14:59:10.240174  476289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt ...
	I1121 14:59:10.240248  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: {Name:mk99392db7bd9e10b58b67eae89522f76d5a1e9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:10.240497  476289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.key ...
	I1121 14:59:10.240532  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.key: {Name:mkedb7c7d0e8e15c68374b08b4b459f1f84322bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:10.240670  476289 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key.88a7d8ce
	I1121 14:59:10.240708  476289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt.88a7d8ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:59:11.123064  476289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt.88a7d8ce ...
	I1121 14:59:11.123137  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt.88a7d8ce: {Name:mkb75663ef02700dcf7aa1a0f7f0156ca6cd7899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:11.123376  476289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key.88a7d8ce ...
	I1121 14:59:11.123421  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key.88a7d8ce: {Name:mkeeae9f7eabc6f3797a27e7f6a3df0ac08eb05a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:11.123559  476289 certs.go:382] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt.88a7d8ce -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt
	I1121 14:59:11.123684  476289 certs.go:386] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key.88a7d8ce -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key
	I1121 14:59:11.123792  476289 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.key
	I1121 14:59:11.123835  476289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.crt with IP's: []
	I1121 14:59:11.402320  476289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.crt ...
	I1121 14:59:11.402391  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.crt: {Name:mk8170063ba1905f83463bd74dcbccbe2033ce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:11.402596  476289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.key ...
	I1121 14:59:11.402631  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.key: {Name:mk32751d5649809e0c1a634c68c9138f872a4276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
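Each "generating signed profile cert" step above writes a crt/key pair under the profile directory, with a lock file guarding each write. The SANs baked into the apiserver cert (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2 per the log) can be verified after the fact with openssl; a sketch, abbreviating the Jenkins workspace path to $HOME:

	CRT=$HOME/.minikube/profiles/no-preload-844780/apiserver.crt
	openssl x509 -in "$CRT" -noout -subject -dates
	openssl x509 -in "$CRT" -noout -text | grep -A1 'Subject Alternative Name'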
	I1121 14:59:11.402933  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 14:59:11.403000  476289 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 14:59:11.403026  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:59:11.403084  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:59:11.403134  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:59:11.403184  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 14:59:11.403252  476289 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 14:59:11.403838  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:59:11.435880  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:59:11.475203  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:59:11.509821  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:59:11.530503  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:59:11.552667  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:59:11.574739  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:59:11.607121  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:59:11.627228  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 14:59:11.650292  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:59:11.671405  476289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 14:59:11.690275  476289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:59:11.703539  476289 ssh_runner.go:195] Run: openssl version
	I1121 14:59:11.710300  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:59:11.718760  476289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:59:11.724151  476289 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:59:11.724250  476289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:59:11.767112  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:59:11.776089  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 14:59:11.785919  476289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 14:59:11.790649  476289 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 14:59:11.790767  476289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 14:59:11.835162  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 14:59:11.845247  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 14:59:11.854623  476289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 14:59:11.859205  476289 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 14:59:11.859314  476289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 14:59:11.903829  476289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
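The openssl x509 -hash runs and the ln -fs ... /etc/ssl/certs/<hash>.0 commands above implement OpenSSL's CA directory lookup convention: a library resolving trust from a hashed certs dir expects each CA to be reachable as <subject-hash>.0. The pattern in one line, using the minikubeCA hash (b5213941) computed above:

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$PEM" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$PEM").0"   # -> b5213941.0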
	I1121 14:59:11.913268  476289 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:59:11.917884  476289 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:59:11.917991  476289 kubeadm.go:401] StartCluster: {Name:no-preload-844780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-844780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:59:11.918082  476289 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:59:11.918156  476289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:59:11.948950  476289 cri.go:89] found id: ""
	I1121 14:59:11.949058  476289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:59:11.959434  476289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:59:11.972765  476289 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:59:11.972881  476289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:59:12.005917  476289 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:59:12.005940  476289 kubeadm.go:158] found existing configuration files:
	
	I1121 14:59:12.006039  476289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:59:12.031231  476289 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:59:12.031348  476289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:59:12.050150  476289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:59:12.069282  476289 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:59:12.069400  476289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:59:12.078750  476289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:59:12.088923  476289 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:59:12.089041  476289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:59:12.097523  476289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:59:12.106790  476289 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:59:12.106903  476289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
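The four grep/rm pairs above amount to one loop: any kubeconfig under /etc/kubernetes that does not already point at the expected control-plane endpoint is deleted so that kubeadm init can regenerate it cleanly. Condensed:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" 2>/dev/null \
	        || sudo rm -f "/etc/kubernetes/$f"
	done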
	I1121 14:59:12.115281  476289 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:59:12.163280  476289 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:59:12.163792  476289 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:59:12.205037  476289 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:59:12.205349  476289 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:59:12.205437  476289 kubeadm.go:319] OS: Linux
	I1121 14:59:12.205528  476289 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:59:12.205611  476289 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:59:12.205717  476289 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:59:12.205790  476289 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:59:12.205874  476289 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:59:12.205953  476289 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:59:12.206025  476289 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:59:12.206112  476289 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:59:12.206199  476289 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:59:12.294846  476289 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:59:12.295025  476289 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:59:12.295147  476289 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:59:12.316724  476289 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:59:10.845757  476123 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.800601827s
	I1121 14:59:11.508710  476123 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 8.463083675s
	I1121 14:59:12.322013  476289 out.go:252]   - Generating certificates and keys ...
	I1121 14:59:12.322172  476289 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:59:12.322281  476289 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:59:12.762910  476289 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:59:13.048274  476123 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.00295778s
	I1121 14:59:13.082253  476123 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:59:13.103124  476123 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:59:13.125298  476123 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:59:13.125795  476123 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-902161 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:59:13.141070  476123 kubeadm.go:319] [bootstrap-token] Using token: rephq1.20w5hkzrb35aw52v
	I1121 14:59:13.143927  476123 out.go:252]   - Configuring RBAC rules ...
	I1121 14:59:13.144053  476123 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:59:13.149965  476123 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:59:13.162505  476123 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:59:13.168771  476123 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:59:13.174433  476123 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:59:13.182093  476123 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:59:13.455866  476123 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:59:14.014297  476123 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:59:14.455995  476123 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:59:14.457566  476123 kubeadm.go:319] 
	I1121 14:59:14.457665  476123 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:59:14.457672  476123 kubeadm.go:319] 
	I1121 14:59:14.457752  476123 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:59:14.457757  476123 kubeadm.go:319] 
	I1121 14:59:14.457783  476123 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:59:14.458273  476123 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:59:14.458340  476123 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:59:14.458346  476123 kubeadm.go:319] 
	I1121 14:59:14.458403  476123 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:59:14.458408  476123 kubeadm.go:319] 
	I1121 14:59:14.458457  476123 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:59:14.458461  476123 kubeadm.go:319] 
	I1121 14:59:14.458516  476123 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:59:14.458594  476123 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:59:14.458665  476123 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:59:14.458670  476123 kubeadm.go:319] 
	I1121 14:59:14.458986  476123 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:59:14.459072  476123 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:59:14.459077  476123 kubeadm.go:319] 
	I1121 14:59:14.459385  476123 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rephq1.20w5hkzrb35aw52v \
	I1121 14:59:14.459499  476123 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 14:59:14.459744  476123 kubeadm.go:319] 	--control-plane 
	I1121 14:59:14.459754  476123 kubeadm.go:319] 
	I1121 14:59:14.460034  476123 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:59:14.460045  476123 kubeadm.go:319] 
	I1121 14:59:14.460345  476123 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rephq1.20w5hkzrb35aw52v \
	I1121 14:59:14.460684  476123 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
	I1121 14:59:14.466023  476123 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 14:59:14.466384  476123 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:59:14.466554  476123 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
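The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA for joining nodes. If the init output is lost, the hash can be recomputed from the CA cert; this is kubeadm's documented recipe, pointed at minikube's certificatesDir (/var/lib/minikube/certs per the config earlier) and assuming an RSA CA key:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'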
	I1121 14:59:14.466591  476123 cni.go:84] Creating CNI manager for ""
	I1121 14:59:14.466628  476123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:59:14.471941  476123 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:59:14.474872  476123 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:59:14.486745  476123 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:59:14.486764  476123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:59:14.515532  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:59:15.008988  476123 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:59:15.009178  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:15.009278  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-902161 minikube.k8s.io/updated_at=2025_11_21T14_59_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=embed-certs-902161 minikube.k8s.io/primary=true
	I1121 14:59:15.307081  476123 ops.go:34] apiserver oom_adj: -16
	I1121 14:59:15.307187  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:15.808254  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:16.307315  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:16.808123  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:17.308192  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:13.307110  476289 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:59:13.399361  476289 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:59:14.083511  476289 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:59:15.191567  476289 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:59:15.191713  476289 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-844780] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:59:15.929787  476289 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:59:15.930334  476289 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-844780] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:59:16.524559  476289 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:59:16.703071  476289 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:59:17.007209  476289 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:59:17.007563  476289 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:59:17.606868  476289 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:59:17.899081  476289 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:59:17.808184  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:18.307855  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:18.807521  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:19.307617  476123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:19.593516  476123 kubeadm.go:1114] duration metric: took 4.584414734s to wait for elevateKubeSystemPrivileges
	I1121 14:59:19.593547  476123 kubeadm.go:403] duration metric: took 27.802553781s to StartCluster
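The repeated "kubectl get sa default" calls at roughly half-second intervals above are a readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controller is running, so its existence gates the privilege-elevation work counted in the 4.58s metric. The same wait as a plain loop:

	KUBECTL="sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	until $KUBECTL get sa default >/dev/null 2>&1; do
	    sleep 0.5
	done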
	I1121 14:59:19.593565  476123 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:19.593624  476123 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:59:19.594619  476123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:19.594850  476123 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:59:19.594966  476123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:59:19.595216  476123 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:59:19.595259  476123 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:59:19.595325  476123 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-902161"
	I1121 14:59:19.595340  476123 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-902161"
	I1121 14:59:19.595365  476123 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 14:59:19.595886  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:59:19.596354  476123 addons.go:70] Setting default-storageclass=true in profile "embed-certs-902161"
	I1121 14:59:19.596377  476123 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-902161"
	I1121 14:59:19.596675  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:59:19.598436  476123 out.go:179] * Verifying Kubernetes components...
	I1121 14:59:19.602045  476123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:59:19.639511  476123 addons.go:239] Setting addon default-storageclass=true in "embed-certs-902161"
	I1121 14:59:19.639554  476123 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 14:59:19.639722  476123 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:59:18.728763  476289 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:59:19.532499  476289 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:59:21.050512  476289 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:59:21.067595  476289 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:59:21.069593  476289 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:59:19.639970  476123 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 14:59:19.646756  476123 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:59:19.646783  476123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:59:19.646851  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:59:19.692493  476123 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:59:19.692515  476123 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:59:19.692580  476123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 14:59:19.707810  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:59:19.728377  476123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 14:59:20.141174  476123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:59:20.162326  476123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:59:20.213879  476123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:59:20.248305  476123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:59:21.529137  476123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.38787749s)
	I1121 14:59:21.529202  476123 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.366788017s)
	I1121 14:59:21.529217  476123 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1121 14:59:21.530472  476123 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.316527291s)
	I1121 14:59:21.531290  476123 node_ready.go:35] waiting up to 6m0s for node "embed-certs-902161" to be "Ready" ...
	I1121 14:59:21.531531  476123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.283161437s)
	I1121 14:59:21.592939  476123 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:59:21.595845  476123 addons.go:530] duration metric: took 2.000565832s for enable addons: enabled=[storage-provisioner default-storageclass]
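[editor's note] The addon flow above is uniform: each manifest is copied into /etc/kubernetes/addons/ on the node, then applied with the cluster's bundled kubectl under the in-VM kubeconfig. Condensed from the Run:/Completed: pairs above (all paths are the ones the log shows, nothing new):

    # Apply one addon the way the log does: manifest already copied to the
    # node, then kubectl apply with the node-local kubeconfig.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml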
	I1121 14:59:22.032910  476123 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-902161" context rescaled to 1 replicas
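[editor's note] The sed pipeline a few lines up rewrites the coredns ConfigMap in place; this is what the "host record injected into CoreDNS's ConfigMap" line refers to. Reconstructed from the sed expressions (for the embed-certs cluster on 192.168.76.0/24), the resulting Corefile fragment gains a log directive and a hosts block, so pods can resolve host.minikube.internal to the host-side gateway:

    log
    errors
    ...
    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf ...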
	I1121 14:59:21.073596  476289 out.go:252]   - Booting up control plane ...
	I1121 14:59:21.073723  476289 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:59:21.073806  476289 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:59:21.075288  476289 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:59:21.123563  476289 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:59:21.123677  476289 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:59:21.131229  476289 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:59:21.131575  476289 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:59:21.131625  476289 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:59:21.324901  476289 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:59:21.325027  476289 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:59:22.825480  476289 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501578452s
	I1121 14:59:22.829119  476289 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:59:22.829220  476289 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1121 14:59:22.829320  476289 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:59:22.830020  476289 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1121 14:59:23.535065  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:26.035000  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	I1121 14:59:26.792198  476289 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.962302785s
	I1121 14:59:28.528851  476289 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.698977576s
	I1121 14:59:29.332013  476289 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502659173s
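[editor's note] kubeadm's control-plane-check probes fixed endpoints on the node, and the same checks can be reproduced by hand. The addresses are exactly the ones in the log; -k is needed because the serving certificates are signed by the cluster CA, and the health paths are readable without credentials on a default kubeadm setup:

    curl -s  http://127.0.0.1:10248/healthz     # kubelet
    curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
    curl -sk https://192.168.85.2:8443/livez    # kube-apiserver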
	I1121 14:59:29.360935  476289 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:59:29.375366  476289 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:59:29.389589  476289 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:59:29.389812  476289 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-844780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:59:29.402078  476289 kubeadm.go:319] [bootstrap-token] Using token: djj5gi.szbg9jrs40jfwzmo
	I1121 14:59:29.405074  476289 out.go:252]   - Configuring RBAC rules ...
	I1121 14:59:29.405206  476289 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:59:29.409190  476289 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:59:29.421994  476289 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:59:29.426771  476289 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:59:29.431448  476289 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:59:29.435746  476289 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:59:29.739403  476289 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:59:30.194260  476289 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:59:30.738826  476289 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:59:30.739944  476289 kubeadm.go:319] 
	I1121 14:59:30.740022  476289 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:59:30.740036  476289 kubeadm.go:319] 
	I1121 14:59:30.740122  476289 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:59:30.740131  476289 kubeadm.go:319] 
	I1121 14:59:30.740158  476289 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:59:30.740223  476289 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:59:30.740279  476289 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:59:30.740287  476289 kubeadm.go:319] 
	I1121 14:59:30.740345  476289 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:59:30.740353  476289 kubeadm.go:319] 
	I1121 14:59:30.740428  476289 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:59:30.740439  476289 kubeadm.go:319] 
	I1121 14:59:30.740495  476289 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:59:30.740579  476289 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:59:30.740655  476289 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:59:30.740663  476289 kubeadm.go:319] 
	I1121 14:59:30.740752  476289 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:59:30.740838  476289 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:59:30.740845  476289 kubeadm.go:319] 
	I1121 14:59:30.740932  476289 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token djj5gi.szbg9jrs40jfwzmo \
	I1121 14:59:30.741044  476289 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 14:59:30.741071  476289 kubeadm.go:319] 	--control-plane 
	I1121 14:59:30.741080  476289 kubeadm.go:319] 
	I1121 14:59:30.741169  476289 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:59:30.741178  476289 kubeadm.go:319] 
	I1121 14:59:30.741264  476289 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token djj5gi.szbg9jrs40jfwzmo \
	I1121 14:59:30.741375  476289 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
	I1121 14:59:30.744656  476289 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 14:59:30.744895  476289 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:59:30.745011  476289 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:59:30.745032  476289 cni.go:84] Creating CNI manager for ""
	I1121 14:59:30.745040  476289 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:59:30.748373  476289 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1121 14:59:28.534781  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:31.034734  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	I1121 14:59:30.751367  476289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:59:30.755567  476289 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:59:30.755586  476289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:59:30.770481  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
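[editor's note] The CNI step is two commands: confirm the portmap plugin binary exists, then apply the generated kindnet manifest that was just copied to /var/tmp/minikube/cni.yaml. Run by hand on the node this would be:

    # Mirror of the two Run: lines above.
    stat /opt/cni/bin/portmap
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml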
	I1121 14:59:31.080468  476289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:59:31.080536  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:31.080621  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-844780 minikube.k8s.io/updated_at=2025_11_21T14_59_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=no-preload-844780 minikube.k8s.io/primary=true
	I1121 14:59:31.107263  476289 ops.go:34] apiserver oom_adj: -16
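[editor's note] The oom_adj line above reads the kernel's OOM score adjustment for the apiserver process; -16 makes the kernel much less likely to OOM-kill the apiserver than ordinary workloads. The probe is just:

    # Same probe as the Run: line above (oom_adj is the legacy interface;
    # /proc/<pid>/oom_score_adj is the modern equivalent).
    cat /proc/$(pgrep kube-apiserver)/oom_adj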
	I1121 14:59:31.252489  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:31.752594  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:32.253127  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:32.752535  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:33.252546  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:33.752509  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:34.252594  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:34.752829  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:35.253044  476289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:59:35.392107  476289 kubeadm.go:1114] duration metric: took 4.311629625s to wait for elevateKubeSystemPrivileges
	I1121 14:59:35.392140  476289 kubeadm.go:403] duration metric: took 23.474154301s to StartCluster
	I1121 14:59:35.392157  476289 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:35.392220  476289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:59:35.393813  476289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:59:35.394242  476289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:59:35.394551  476289 config.go:182] Loaded profile config "no-preload-844780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:59:35.394613  476289 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:59:35.394687  476289 addons.go:70] Setting storage-provisioner=true in profile "no-preload-844780"
	I1121 14:59:35.394702  476289 addons.go:239] Setting addon storage-provisioner=true in "no-preload-844780"
	I1121 14:59:35.394726  476289 host.go:66] Checking if "no-preload-844780" exists ...
	I1121 14:59:35.395223  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:59:35.395384  476289 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:59:35.395799  476289 addons.go:70] Setting default-storageclass=true in profile "no-preload-844780"
	I1121 14:59:35.395822  476289 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-844780"
	I1121 14:59:35.396101  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:59:35.398725  476289 out.go:179] * Verifying Kubernetes components...
	I1121 14:59:35.403918  476289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:59:35.436646  476289 addons.go:239] Setting addon default-storageclass=true in "no-preload-844780"
	I1121 14:59:35.436685  476289 host.go:66] Checking if "no-preload-844780" exists ...
	I1121 14:59:35.437798  476289 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 14:59:35.439933  476289 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:59:35.443174  476289 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:59:35.443196  476289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:59:35.443263  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:59:35.479440  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:59:35.481314  476289 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:59:35.481334  476289 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:59:35.481407  476289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 14:59:35.513660  476289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 14:59:35.685134  476289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:59:35.723111  476289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:59:35.753823  476289 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:59:35.799789  476289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:59:36.374464  476289 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:59:36.832359  476289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.109211431s)
	I1121 14:59:36.832457  476289 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.078606193s)
	I1121 14:59:36.832480  476289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.032672231s)
	I1121 14:59:36.834100  476289 node_ready.go:35] waiting up to 6m0s for node "no-preload-844780" to be "Ready" ...
	I1121 14:59:36.847942  476289 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1121 14:59:33.035499  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:35.535508  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	I1121 14:59:36.850834  476289 addons.go:530] duration metric: took 1.456202682s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:59:36.878893  476289 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-844780" context rescaled to 1 replicas
	W1121 14:59:38.039737  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:40.534969  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:38.841106  476289 node_ready.go:57] node "no-preload-844780" has "Ready":"False" status (will retry)
	W1121 14:59:41.338164  476289 node_ready.go:57] node "no-preload-844780" has "Ready":"False" status (will retry)
	W1121 14:59:43.034906  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:45.041009  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:47.534505  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:43.837103  476289 node_ready.go:57] node "no-preload-844780" has "Ready":"False" status (will retry)
	W1121 14:59:45.837618  476289 node_ready.go:57] node "no-preload-844780" has "Ready":"False" status (will retry)
	W1121 14:59:47.837880  476289 node_ready.go:57] node "no-preload-844780" has "Ready":"False" status (will retry)
	I1121 14:59:49.838799  476289 node_ready.go:49] node "no-preload-844780" is "Ready"
	I1121 14:59:49.838831  476289 node_ready.go:38] duration metric: took 13.00464932s for node "no-preload-844780" to be "Ready" ...
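[editor's note] The 13s wait above polls the node object until .status.conditions reports Ready=True, retrying on the "Ready":"False" warnings seen earlier. A one-shot equivalent, assuming kubectl is pointed at the same cluster (hypothetical; the test polls in-process rather than shelling out):

    kubectl wait --for=condition=Ready node/no-preload-844780 --timeout=6m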
	I1121 14:59:49.838846  476289 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:59:49.838915  476289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:59:49.855056  476289 api_server.go:72] duration metric: took 14.459635044s to wait for apiserver process to appear ...
	I1121 14:59:49.855083  476289 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:59:49.855103  476289 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:59:49.864235  476289 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:59:49.865284  476289 api_server.go:141] control plane version: v1.34.1
	I1121 14:59:49.865309  476289 api_server.go:131] duration metric: took 10.218846ms to wait for apiserver health ...
	I1121 14:59:49.865319  476289 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:59:49.869960  476289 system_pods.go:59] 8 kube-system pods found
	I1121 14:59:49.870059  476289 system_pods.go:61] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:59:49.870089  476289 system_pods.go:61] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running
	I1121 14:59:49.870179  476289 system_pods.go:61] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 14:59:49.870243  476289 system_pods.go:61] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running
	I1121 14:59:49.870318  476289 system_pods.go:61] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running
	I1121 14:59:49.870345  476289 system_pods.go:61] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running
	I1121 14:59:49.870361  476289 system_pods.go:61] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running
	I1121 14:59:49.870420  476289 system_pods.go:61] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:59:49.870459  476289 system_pods.go:74] duration metric: took 5.134371ms to wait for pod list to return data ...
	I1121 14:59:49.870485  476289 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:59:49.874699  476289 default_sa.go:45] found service account: "default"
	I1121 14:59:49.874783  476289 default_sa.go:55] duration metric: took 4.277975ms for default service account to be created ...
	I1121 14:59:49.874808  476289 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:59:49.889528  476289 system_pods.go:86] 8 kube-system pods found
	I1121 14:59:49.889563  476289 system_pods.go:89] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:59:49.889571  476289 system_pods.go:89] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running
	I1121 14:59:49.889577  476289 system_pods.go:89] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 14:59:49.889583  476289 system_pods.go:89] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running
	I1121 14:59:49.889588  476289 system_pods.go:89] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running
	I1121 14:59:49.889592  476289 system_pods.go:89] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running
	I1121 14:59:49.889596  476289 system_pods.go:89] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running
	I1121 14:59:49.889602  476289 system_pods.go:89] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:59:49.889636  476289 retry.go:31] will retry after 237.389598ms: missing components: kube-dns
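[editor's note] "missing components: kube-dns" means no pod carrying the k8s-app=kube-dns label is Running yet; the coredns pod is still Pending in the listing above. The state the retry loop is watching can be inspected with:

    kubectl -n kube-system get pods -l k8s-app=kube-dns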
	I1121 14:59:50.134184  476289 system_pods.go:86] 8 kube-system pods found
	I1121 14:59:50.134282  476289 system_pods.go:89] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:59:50.134305  476289 system_pods.go:89] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running
	I1121 14:59:50.134341  476289 system_pods.go:89] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 14:59:50.134370  476289 system_pods.go:89] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running
	I1121 14:59:50.134397  476289 system_pods.go:89] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running
	I1121 14:59:50.134415  476289 system_pods.go:89] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running
	I1121 14:59:50.134443  476289 system_pods.go:89] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running
	I1121 14:59:50.134471  476289 system_pods.go:89] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:59:50.134502  476289 retry.go:31] will retry after 299.453607ms: missing components: kube-dns
	I1121 14:59:50.439861  476289 system_pods.go:86] 8 kube-system pods found
	I1121 14:59:50.439946  476289 system_pods.go:89] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Running
	I1121 14:59:50.439969  476289 system_pods.go:89] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running
	I1121 14:59:50.439986  476289 system_pods.go:89] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 14:59:50.440021  476289 system_pods.go:89] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running
	I1121 14:59:50.440045  476289 system_pods.go:89] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running
	I1121 14:59:50.440062  476289 system_pods.go:89] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running
	I1121 14:59:50.440079  476289 system_pods.go:89] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running
	I1121 14:59:50.440107  476289 system_pods.go:89] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Running
	I1121 14:59:50.440131  476289 system_pods.go:126] duration metric: took 565.298663ms to wait for k8s-apps to be running ...
	I1121 14:59:50.440151  476289 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:59:50.440236  476289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:59:50.464899  476289 system_svc.go:56] duration metric: took 24.736818ms WaitForService to wait for kubelet
	I1121 14:59:50.464976  476289 kubeadm.go:587] duration metric: took 15.069559286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:59:50.465026  476289 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:59:50.468694  476289 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:59:50.468774  476289 node_conditions.go:123] node cpu capacity is 2
	I1121 14:59:50.468801  476289 node_conditions.go:105] duration metric: took 3.742871ms to run NodePressure ...
	I1121 14:59:50.468839  476289 start.go:242] waiting for startup goroutines ...
	I1121 14:59:50.468863  476289 start.go:247] waiting for cluster config update ...
	I1121 14:59:50.468888  476289 start.go:256] writing updated cluster config ...
	I1121 14:59:50.469240  476289 ssh_runner.go:195] Run: rm -f paused
	I1121 14:59:50.473648  476289 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:59:50.478095  476289 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2mqjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.482829  476289 pod_ready.go:94] pod "coredns-66bc5c9577-2mqjs" is "Ready"
	I1121 14:59:50.482901  476289 pod_ready.go:86] duration metric: took 4.744943ms for pod "coredns-66bc5c9577-2mqjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.485258  476289 pod_ready.go:83] waiting for pod "etcd-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.489544  476289 pod_ready.go:94] pod "etcd-no-preload-844780" is "Ready"
	I1121 14:59:50.489617  476289 pod_ready.go:86] duration metric: took 4.291267ms for pod "etcd-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.491813  476289 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.495853  476289 pod_ready.go:94] pod "kube-apiserver-no-preload-844780" is "Ready"
	I1121 14:59:50.495919  476289 pod_ready.go:86] duration metric: took 4.053709ms for pod "kube-apiserver-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.498158  476289 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:50.878791  476289 pod_ready.go:94] pod "kube-controller-manager-no-preload-844780" is "Ready"
	I1121 14:59:50.878821  476289 pod_ready.go:86] duration metric: took 380.600331ms for pod "kube-controller-manager-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:51.078424  476289 pod_ready.go:83] waiting for pod "kube-proxy-2zwvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:51.477571  476289 pod_ready.go:94] pod "kube-proxy-2zwvg" is "Ready"
	I1121 14:59:51.477654  476289 pod_ready.go:86] duration metric: took 399.200609ms for pod "kube-proxy-2zwvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:51.678645  476289 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:52.078999  476289 pod_ready.go:94] pod "kube-scheduler-no-preload-844780" is "Ready"
	I1121 14:59:52.079029  476289 pod_ready.go:86] duration metric: took 400.355215ms for pod "kube-scheduler-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:59:52.079043  476289 pod_ready.go:40] duration metric: took 1.605323858s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
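[editor's note] This extra wait walks each control-plane pod label in turn until the pod is Ready or gone. A rough hand-rolled equivalent for one of the labels (hypothetical; the test iterates all six labels listed in the log line above):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=4m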
	I1121 14:59:52.135323  476289 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 14:59:52.138660  476289 out.go:179] * Done! kubectl is now configured to use "no-preload-844780" cluster and "default" namespace by default
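[editor's note] The closing "minor skew: 1" note is kubectl's version-skew check: a 1.33 client against a 1.34 server is within the supported window of one minor version in either direction, so it is reported but not treated as an error. The pair can be confirmed with:

    kubectl version --output=json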
	W1121 14:59:49.534897  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:52.034528  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:54.035401  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:56.535268  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 14:59:59.034776  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	W1121 15:00:01.050734  476123 node_ready.go:57] node "embed-certs-902161" has "Ready":"False" status (will retry)
	I1121 15:00:01.537529  476123 node_ready.go:49] node "embed-certs-902161" is "Ready"
	I1121 15:00:01.537567  476123 node_ready.go:38] duration metric: took 40.006249354s for node "embed-certs-902161" to be "Ready" ...
	I1121 15:00:01.537583  476123 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:00:01.537670  476123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:00:01.666268  476123 api_server.go:72] duration metric: took 42.07137973s to wait for apiserver process to appear ...
	I1121 15:00:01.666296  476123 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:00:01.666317  476123 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:00:01.760129  476123 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 15:00:01.784777  476123 api_server.go:141] control plane version: v1.34.1
	I1121 15:00:01.784811  476123 api_server.go:131] duration metric: took 118.505232ms to wait for apiserver health ...
	I1121 15:00:01.784821  476123 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:00:01.796179  476123 system_pods.go:59] 8 kube-system pods found
	I1121 15:00:01.796227  476123 system_pods.go:61] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Pending
	I1121 15:00:01.796235  476123 system_pods.go:61] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:01.796242  476123 system_pods.go:61] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:01.796247  476123 system_pods.go:61] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:01.796252  476123 system_pods.go:61] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:01.796257  476123 system_pods.go:61] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:01.796262  476123 system_pods.go:61] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:01.796272  476123 system_pods.go:61] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:00:01.796281  476123 system_pods.go:74] duration metric: took 11.453641ms to wait for pod list to return data ...
	I1121 15:00:01.796299  476123 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:00:01.841455  476123 default_sa.go:45] found service account: "default"
	I1121 15:00:01.841497  476123 default_sa.go:55] duration metric: took 45.189606ms for default service account to be created ...
	I1121 15:00:01.841509  476123 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:00:01.854354  476123 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:01.854388  476123 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:01.854402  476123 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:01.854411  476123 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:01.854434  476123 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:01.854440  476123 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:01.854444  476123 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:01.854448  476123 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:01.854457  476123 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:00:01.854486  476123 retry.go:31] will retry after 292.977025ms: missing components: kube-dns
	I1121 15:00:02.240221  476123 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:02.240265  476123 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:02.240273  476123 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:02.240281  476123 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:02.240286  476123 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:02.240291  476123 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:02.240296  476123 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:02.240300  476123 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:02.240307  476123 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:00:02.240324  476123 retry.go:31] will retry after 368.124563ms: missing components: kube-dns
	I1121 15:00:02.629650  476123 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:02.629688  476123 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:02.629697  476123 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:02.629704  476123 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:02.629710  476123 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:02.629715  476123 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:02.629719  476123 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:02.629723  476123 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:02.629727  476123 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Running
	I1121 15:00:02.629743  476123 retry.go:31] will retry after 346.269936ms: missing components: kube-dns
	I1121 15:00:03.078140  476123 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:03.078177  476123 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:03.078184  476123 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:03.078191  476123 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:03.078195  476123 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:03.078200  476123 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:03.078203  476123 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:03.078207  476123 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:03.078211  476123 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Running
	I1121 15:00:03.078225  476123 retry.go:31] will retry after 426.929839ms: missing components: kube-dns
	I1121 15:00:03.514225  476123 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:03.514260  476123 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:03.514267  476123 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:03.514275  476123 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:03.514280  476123 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:03.514284  476123 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:03.514289  476123 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:03.514294  476123 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:03.514297  476123 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Running
	I1121 15:00:03.514312  476123 retry.go:31] will retry after 469.6549ms: missing components: kube-dns
	I1121 15:00:03.989401  476123 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:03.989431  476123 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Running
	I1121 15:00:03.989439  476123 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running
	I1121 15:00:03.989445  476123 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:03.989449  476123 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running
	I1121 15:00:03.989454  476123 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running
	I1121 15:00:03.989458  476123 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:03.989462  476123 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running
	I1121 15:00:03.989466  476123 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Running
	I1121 15:00:03.989473  476123 system_pods.go:126] duration metric: took 2.14795822s to wait for k8s-apps to be running ...
	I1121 15:00:03.989481  476123 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:00:03.989543  476123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:00:04.007887  476123 system_svc.go:56] duration metric: took 18.392718ms WaitForService to wait for kubelet
	I1121 15:00:04.007921  476123 kubeadm.go:587] duration metric: took 44.413039158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:00:04.007942  476123 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:00:04.013235  476123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:00:04.013268  476123 node_conditions.go:123] node cpu capacity is 2
	I1121 15:00:04.013283  476123 node_conditions.go:105] duration metric: took 5.335284ms to run NodePressure ...
	I1121 15:00:04.013297  476123 start.go:242] waiting for startup goroutines ...
	I1121 15:00:04.013304  476123 start.go:247] waiting for cluster config update ...
	I1121 15:00:04.013317  476123 start.go:256] writing updated cluster config ...
	I1121 15:00:04.013647  476123 ssh_runner.go:195] Run: rm -f paused
	I1121 15:00:04.021216  476123 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:00:04.026255  476123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gttll" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:04.034071  476123 pod_ready.go:94] pod "coredns-66bc5c9577-gttll" is "Ready"
	I1121 15:00:04.034103  476123 pod_ready.go:86] duration metric: took 7.812909ms for pod "coredns-66bc5c9577-gttll" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:04.037776  476123 pod_ready.go:83] waiting for pod "etcd-embed-certs-902161" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:04.045098  476123 pod_ready.go:94] pod "etcd-embed-certs-902161" is "Ready"
	I1121 15:00:04.045130  476123 pod_ready.go:86] duration metric: took 7.328021ms for pod "etcd-embed-certs-902161" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:04.048083  476123 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-902161" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:04.056368  476123 pod_ready.go:94] pod "kube-apiserver-embed-certs-902161" is "Ready"
	I1121 15:00:04.056411  476123 pod_ready.go:86] duration metric: took 8.298511ms for pod "kube-apiserver-embed-certs-902161" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:04.062743  476123 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-902161" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:04.425772  476123 pod_ready.go:94] pod "kube-controller-manager-embed-certs-902161" is "Ready"
	I1121 15:00:04.425803  476123 pod_ready.go:86] duration metric: took 362.988504ms for pod "kube-controller-manager-embed-certs-902161" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:04.626559  476123 pod_ready.go:83] waiting for pod "kube-proxy-wkbb9" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:05.026739  476123 pod_ready.go:94] pod "kube-proxy-wkbb9" is "Ready"
	I1121 15:00:05.026785  476123 pod_ready.go:86] duration metric: took 400.194409ms for pod "kube-proxy-wkbb9" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:05.226619  476123 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-902161" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:05.626834  476123 pod_ready.go:94] pod "kube-scheduler-embed-certs-902161" is "Ready"
	I1121 15:00:05.626861  476123 pod_ready.go:86] duration metric: took 400.211148ms for pod "kube-scheduler-embed-certs-902161" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:00:05.626873  476123 pod_ready.go:40] duration metric: took 1.605562375s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:00:05.730486  476123 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:00:05.734204  476123 out.go:179] * Done! kubectl is now configured to use "embed-certs-902161" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 21 15:00:02 embed-certs-902161 crio[843]: time="2025-11-21T15:00:02.515872022Z" level=info msg="Created container 459c52574f0296038cc3abbea68f28b48e40ddf9ba82033cb2a71cfa4be0c653: kube-system/coredns-66bc5c9577-gttll/coredns" id=c1cc8cbb-f899-4f44-a20e-f83dc0b63742 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:00:02 embed-certs-902161 crio[843]: time="2025-11-21T15:00:02.517977951Z" level=info msg="Starting container: 459c52574f0296038cc3abbea68f28b48e40ddf9ba82033cb2a71cfa4be0c653" id=ffdcadd0-95a1-4a91-94bc-1d52f3128695 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:00:02 embed-certs-902161 crio[843]: time="2025-11-21T15:00:02.521307081Z" level=info msg="Started container" PID=1748 containerID=459c52574f0296038cc3abbea68f28b48e40ddf9ba82033cb2a71cfa4be0c653 description=kube-system/coredns-66bc5c9577-gttll/coredns id=ffdcadd0-95a1-4a91-94bc-1d52f3128695 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5592d449e6fd95e8bf26a85be8aeeb3f22264c1c96c49a545755de0d20c48829
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.322817237Z" level=info msg="Running pod sandbox: default/busybox/POD" id=211e7936-917d-459d-82c2-18e2010e8bf3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.322904771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.338429073Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7f5a9a837c662ba9604be721e6de5dc78189b6f29d9e8cf86f3804da072dbaf7 UID:9a929447-7041-4d25-a008-51ccf9c7f5e2 NetNS:/var/run/netns/09df3e16-dc1d-4753-85eb-e4940b2b3e2a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000460508}] Aliases:map[]}"
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.345843397Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.360236667Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7f5a9a837c662ba9604be721e6de5dc78189b6f29d9e8cf86f3804da072dbaf7 UID:9a929447-7041-4d25-a008-51ccf9c7f5e2 NetNS:/var/run/netns/09df3e16-dc1d-4753-85eb-e4940b2b3e2a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000460508}] Aliases:map[]}"
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.360442175Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.363074173Z" level=info msg="Ran pod sandbox 7f5a9a837c662ba9604be721e6de5dc78189b6f29d9e8cf86f3804da072dbaf7 with infra container: default/busybox/POD" id=211e7936-917d-459d-82c2-18e2010e8bf3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.366216332Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b677cc3f-626c-4f92-9d88-d5d03813c49a name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.366511252Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b677cc3f-626c-4f92-9d88-d5d03813c49a name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.366656977Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b677cc3f-626c-4f92-9d88-d5d03813c49a name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.368748801Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1f8e7e41-9a9f-4b05-8185-a559ddcb3323 name=/runtime.v1.ImageService/PullImage
	Nov 21 15:00:06 embed-certs-902161 crio[843]: time="2025-11-21T15:00:06.370466917Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 15:00:08 embed-certs-902161 crio[843]: time="2025-11-21T15:00:08.506098781Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1f8e7e41-9a9f-4b05-8185-a559ddcb3323 name=/runtime.v1.ImageService/PullImage
	Nov 21 15:00:08 embed-certs-902161 crio[843]: time="2025-11-21T15:00:08.507015215Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f376198f-f434-4951-b78d-66cf6145023e name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:00:08 embed-certs-902161 crio[843]: time="2025-11-21T15:00:08.509843293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=51120bed-2e3b-4d3f-ad39-8f7543d333d9 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:00:08 embed-certs-902161 crio[843]: time="2025-11-21T15:00:08.515488382Z" level=info msg="Creating container: default/busybox/busybox" id=0fd060f4-b2fb-4d9b-b7bc-0709865110f6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:00:08 embed-certs-902161 crio[843]: time="2025-11-21T15:00:08.51562344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:00:08 embed-certs-902161 crio[843]: time="2025-11-21T15:00:08.520719541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:00:08 embed-certs-902161 crio[843]: time="2025-11-21T15:00:08.52118801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:00:08 embed-certs-902161 crio[843]: time="2025-11-21T15:00:08.53603434Z" level=info msg="Created container fc5d61f25ce0809aaba0bd493b464507ec7a783a6e1d59d0b115e1908cef2a76: default/busybox/busybox" id=0fd060f4-b2fb-4d9b-b7bc-0709865110f6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:00:08 embed-certs-902161 crio[843]: time="2025-11-21T15:00:08.537119374Z" level=info msg="Starting container: fc5d61f25ce0809aaba0bd493b464507ec7a783a6e1d59d0b115e1908cef2a76" id=f2595837-c200-4586-b515-2afd1b28d097 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:00:08 embed-certs-902161 crio[843]: time="2025-11-21T15:00:08.539737818Z" level=info msg="Started container" PID=1803 containerID=fc5d61f25ce0809aaba0bd493b464507ec7a783a6e1d59d0b115e1908cef2a76 description=default/busybox/busybox id=f2595837-c200-4586-b515-2afd1b28d097 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f5a9a837c662ba9604be721e6de5dc78189b6f29d9e8cf86f3804da072dbaf7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	fc5d61f25ce08       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   7f5a9a837c662       busybox                                      default
	459c52574f029       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   5592d449e6fd9       coredns-66bc5c9577-gttll                     kube-system
	bc8bb37fe371c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   95602bdec4738       storage-provisioner                          kube-system
	a18e3f5d5c3fa       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   dd42747abe00c       kindnet-9zs98                                kube-system
	715a2b1bf0b89       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      56 seconds ago       Running             kube-proxy                0                   abbe4ca15bee6       kube-proxy-wkbb9                             kube-system
	053d5b2d311ec       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   db0452fbbf357       kube-scheduler-embed-certs-902161            kube-system
	a93059615d9aa       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   d095338bb7c86       kube-controller-manager-embed-certs-902161   kube-system
	eec12eea5ee89       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   12d3e63842418       kube-apiserver-embed-certs-902161            kube-system
	9a84fc8be0301       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   5d4722e7b390d       etcd-embed-certs-902161                      kube-system
	
	
	==> coredns [459c52574f0296038cc3abbea68f28b48e40ddf9ba82033cb2a71cfa4be0c653] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46308 - 55124 "HINFO IN 1218712161348806969.2072076919548879363. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015178658s
	
	
	==> describe nodes <==
	Name:               embed-certs-902161
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-902161
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=embed-certs-902161
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_59_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:59:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-902161
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:00:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:00:15 +0000   Fri, 21 Nov 2025 14:59:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:00:15 +0000   Fri, 21 Nov 2025 14:59:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:00:15 +0000   Fri, 21 Nov 2025 14:59:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 15:00:15 +0000   Fri, 21 Nov 2025 15:00:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-902161
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                889cdbbe-ffd5-4f2f-86b7-0117a83246d8
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-gttll                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-902161                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-9zs98                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-902161             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-902161    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-wkbb9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-902161             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node embed-certs-902161 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node embed-certs-902161 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 73s)  kubelet          Node embed-certs-902161 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node embed-certs-902161 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node embed-certs-902161 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node embed-certs-902161 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node embed-certs-902161 event: Registered Node embed-certs-902161 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-902161 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 14:34] overlayfs: idmapped layers are currently not supported
	[Nov21 14:35] overlayfs: idmapped layers are currently not supported
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a84fc8be030153cdbd8758d77dc16cfc07cd5ea829fda88e3e030e2c84b19d8] <==
	{"level":"warn","ts":"2025-11-21T14:59:07.130386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.146078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.163831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.182477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.206375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.289549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.311321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.329790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.405487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.425597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.475740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.548565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.616434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.688204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.710991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.823966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.889756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.940230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:07.987481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:08.034287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:08.118016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:08.203305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:08.238323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:08.284504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:59:08.524378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39154","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:00:16 up  2:42,  0 user,  load average: 3.10, 3.01, 2.57
	Linux embed-certs-902161 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a18e3f5d5c3fa868fd98638081238f4c67b61e40a1d259d30b36386dea3e35c7] <==
	I1121 14:59:20.809830       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:59:20.810108       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:59:20.810227       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:59:20.810237       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:59:20.810247       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:59:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:59:21.057102       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:59:21.059405       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:59:21.059456       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:59:21.068476       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:59:51.057684       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:59:51.060193       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:59:51.060193       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:59:51.060317       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1121 14:59:52.259922       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:59:52.259950       1 metrics.go:72] Registering metrics
	I1121 14:59:52.260003       1 controller.go:711] "Syncing nftables rules"
	I1121 15:00:01.057276       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 15:00:01.057411       1 main.go:301] handling current node
	I1121 15:00:11.057673       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 15:00:11.057717       1 main.go:301] handling current node
	
	
	==> kube-apiserver [eec12eea5ee8950189ce4c8c0990c6f698611b6a8d388119183ecf24edbb02fb] <==
	I1121 14:59:10.730042       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 14:59:10.731738       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:59:10.732638       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:59:10.803792       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:59:10.841924       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:59:10.842044       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:59:10.926854       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:59:10.934329       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:59:11.263814       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:59:11.322857       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:59:11.334081       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:59:12.498860       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:59:12.651091       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:59:12.752551       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:59:12.760742       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1121 14:59:12.761871       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:59:12.769253       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:59:13.562300       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:59:13.968827       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:59:14.010841       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:59:14.038548       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:59:19.247091       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:59:19.768139       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:59:19.790556       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:59:20.054103       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [a93059615d9aa488421b4e9a5fcabe794d32c5158b8939e648d8b35e44ae3681] <==
	I1121 14:59:18.603871       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 14:59:18.604040       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:59:18.604112       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:59:18.604175       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:59:18.604461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:59:18.604475       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:59:18.604485       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:59:18.604500       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:59:18.612735       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:59:18.616989       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:59:18.628915       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-902161" podCIDRs=["10.244.0.0/24"]
	I1121 14:59:18.629064       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:59:18.635458       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:59:18.635577       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:59:18.635945       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:59:18.638502       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:59:18.638819       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:59:18.638948       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:59:18.639757       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-902161"
	I1121 14:59:18.640921       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 14:59:18.648477       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:59:18.650804       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:59:18.651801       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:59:18.669190       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:00:03.648976       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [715a2b1bf0b89bc1d0a1e14dd5ed5c5b12dc0dd091ada71b5a5916410ba96da1] <==
	I1121 14:59:20.736731       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:59:20.924377       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:59:21.025187       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:59:21.025222       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 14:59:21.025287       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:59:21.144904       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:59:21.144956       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:59:21.157219       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:59:21.157570       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:59:21.157594       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:59:21.158864       1 config.go:200] "Starting service config controller"
	I1121 14:59:21.158952       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:59:21.159194       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:59:21.159237       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:59:21.159282       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:59:21.159309       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:59:21.160055       1 config.go:309] "Starting node config controller"
	I1121 14:59:21.162471       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:59:21.162552       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:59:21.272526       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:59:21.272564       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:59:21.272643       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [053d5b2d311ec58d22c63de9e91f076c87cfd02452429778fd446c41244656f8] <==
	E1121 14:59:10.853055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:59:10.853131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:59:10.853190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:59:10.853238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:59:10.862600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:59:10.862747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:59:10.862867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:59:10.862961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:59:10.863109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:59:10.863190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:59:11.723284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:59:11.724602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:59:11.740683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:59:11.827020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:59:11.862122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:59:11.876324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:59:11.965687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:59:12.029573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:59:12.043534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:59:12.044629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:59:12.059327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:59:12.059482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 14:59:12.065450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:59:12.180009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1121 14:59:14.586897       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: I1121 14:59:19.374474    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a59095a4-c10e-4739-809b-fa5606b9b835-xtables-lock\") pod \"kube-proxy-wkbb9\" (UID: \"a59095a4-c10e-4739-809b-fa5606b9b835\") " pod="kube-system/kube-proxy-wkbb9"
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: I1121 14:59:19.374545    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4f7aaa72-4c04-42c6-b6c3-363eef49e44f-cni-cfg\") pod \"kindnet-9zs98\" (UID: \"4f7aaa72-4c04-42c6-b6c3-363eef49e44f\") " pod="kube-system/kindnet-9zs98"
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: I1121 14:59:19.374579    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f7aaa72-4c04-42c6-b6c3-363eef49e44f-xtables-lock\") pod \"kindnet-9zs98\" (UID: \"4f7aaa72-4c04-42c6-b6c3-363eef49e44f\") " pod="kube-system/kindnet-9zs98"
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: I1121 14:59:19.374600    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a59095a4-c10e-4739-809b-fa5606b9b835-lib-modules\") pod \"kube-proxy-wkbb9\" (UID: \"a59095a4-c10e-4739-809b-fa5606b9b835\") " pod="kube-system/kube-proxy-wkbb9"
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: I1121 14:59:19.374667    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbbg\" (UniqueName: \"kubernetes.io/projected/a59095a4-c10e-4739-809b-fa5606b9b835-kube-api-access-wjbbg\") pod \"kube-proxy-wkbb9\" (UID: \"a59095a4-c10e-4739-809b-fa5606b9b835\") " pod="kube-system/kube-proxy-wkbb9"
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: E1121 14:59:19.647049    1310 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: E1121 14:59:19.647089    1310 projected.go:196] Error preparing data for projected volume kube-api-access-f99rk for pod kube-system/kindnet-9zs98: configmap "kube-root-ca.crt" not found
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: E1121 14:59:19.647306    1310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f7aaa72-4c04-42c6-b6c3-363eef49e44f-kube-api-access-f99rk podName:4f7aaa72-4c04-42c6-b6c3-363eef49e44f nodeName:}" failed. No retries permitted until 2025-11-21 14:59:20.147254576 +0000 UTC m=+6.311426929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f99rk" (UniqueName: "kubernetes.io/projected/4f7aaa72-4c04-42c6-b6c3-363eef49e44f-kube-api-access-f99rk") pod "kindnet-9zs98" (UID: "4f7aaa72-4c04-42c6-b6c3-363eef49e44f") : configmap "kube-root-ca.crt" not found
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: E1121 14:59:19.647903    1310 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: E1121 14:59:19.647945    1310 projected.go:196] Error preparing data for projected volume kube-api-access-wjbbg for pod kube-system/kube-proxy-wkbb9: configmap "kube-root-ca.crt" not found
	Nov 21 14:59:19 embed-certs-902161 kubelet[1310]: E1121 14:59:19.647995    1310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a59095a4-c10e-4739-809b-fa5606b9b835-kube-api-access-wjbbg podName:a59095a4-c10e-4739-809b-fa5606b9b835 nodeName:}" failed. No retries permitted until 2025-11-21 14:59:20.14798096 +0000 UTC m=+6.312153313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wjbbg" (UniqueName: "kubernetes.io/projected/a59095a4-c10e-4739-809b-fa5606b9b835-kube-api-access-wjbbg") pod "kube-proxy-wkbb9" (UID: "a59095a4-c10e-4739-809b-fa5606b9b835") : configmap "kube-root-ca.crt" not found
	Nov 21 14:59:20 embed-certs-902161 kubelet[1310]: I1121 14:59:20.185824    1310 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 14:59:20 embed-certs-902161 kubelet[1310]: W1121 14:59:20.532839    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/crio-dd42747abe00cf5b0efffd1635aec450ae84e8bbbf9baf84c2f1025c9b92e513 WatchSource:0}: Error finding container dd42747abe00cf5b0efffd1635aec450ae84e8bbbf9baf84c2f1025c9b92e513: Status 404 returned error can't find the container with id dd42747abe00cf5b0efffd1635aec450ae84e8bbbf9baf84c2f1025c9b92e513
	Nov 21 14:59:21 embed-certs-902161 kubelet[1310]: I1121 14:59:21.379416    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9zs98" podStartSLOduration=2.379398257 podStartE2EDuration="2.379398257s" podCreationTimestamp="2025-11-21 14:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:59:21.342233843 +0000 UTC m=+7.506406196" watchObservedRunningTime="2025-11-21 14:59:21.379398257 +0000 UTC m=+7.543570618"
	Nov 21 14:59:22 embed-certs-902161 kubelet[1310]: I1121 14:59:22.971159    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wkbb9" podStartSLOduration=3.97113917 podStartE2EDuration="3.97113917s" podCreationTimestamp="2025-11-21 14:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:59:21.380944023 +0000 UTC m=+7.545116376" watchObservedRunningTime="2025-11-21 14:59:22.97113917 +0000 UTC m=+9.135311523"
	Nov 21 15:00:01 embed-certs-902161 kubelet[1310]: I1121 15:00:01.357547    1310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 15:00:01 embed-certs-902161 kubelet[1310]: I1121 15:00:01.650175    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/90f25b5f-e180-47de-830a-c9fd43709936-tmp\") pod \"storage-provisioner\" (UID: \"90f25b5f-e180-47de-830a-c9fd43709936\") " pod="kube-system/storage-provisioner"
	Nov 21 15:00:01 embed-certs-902161 kubelet[1310]: I1121 15:00:01.650250    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrgm\" (UniqueName: \"kubernetes.io/projected/90f25b5f-e180-47de-830a-c9fd43709936-kube-api-access-bjrgm\") pod \"storage-provisioner\" (UID: \"90f25b5f-e180-47de-830a-c9fd43709936\") " pod="kube-system/storage-provisioner"
	Nov 21 15:00:01 embed-certs-902161 kubelet[1310]: I1121 15:00:01.760675    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l668\" (UniqueName: \"kubernetes.io/projected/3a4724fc-20fc-4b84-86b5-c3e0255a8563-kube-api-access-7l668\") pod \"coredns-66bc5c9577-gttll\" (UID: \"3a4724fc-20fc-4b84-86b5-c3e0255a8563\") " pod="kube-system/coredns-66bc5c9577-gttll"
	Nov 21 15:00:01 embed-certs-902161 kubelet[1310]: I1121 15:00:01.760768    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a4724fc-20fc-4b84-86b5-c3e0255a8563-config-volume\") pod \"coredns-66bc5c9577-gttll\" (UID: \"3a4724fc-20fc-4b84-86b5-c3e0255a8563\") " pod="kube-system/coredns-66bc5c9577-gttll"
	Nov 21 15:00:01 embed-certs-902161 kubelet[1310]: W1121 15:00:01.947357    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/crio-95602bdec4738ddf3e06025c095584f278a626bfa31751b781da6cea9a38fe2d WatchSource:0}: Error finding container 95602bdec4738ddf3e06025c095584f278a626bfa31751b781da6cea9a38fe2d: Status 404 returned error can't find the container with id 95602bdec4738ddf3e06025c095584f278a626bfa31751b781da6cea9a38fe2d
	Nov 21 15:00:02 embed-certs-902161 kubelet[1310]: W1121 15:00:02.368048    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/crio-5592d449e6fd95e8bf26a85be8aeeb3f22264c1c96c49a545755de0d20c48829 WatchSource:0}: Error finding container 5592d449e6fd95e8bf26a85be8aeeb3f22264c1c96c49a545755de0d20c48829: Status 404 returned error can't find the container with id 5592d449e6fd95e8bf26a85be8aeeb3f22264c1c96c49a545755de0d20c48829
	Nov 21 15:00:03 embed-certs-902161 kubelet[1310]: I1121 15:00:03.533498    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.533476536 podStartE2EDuration="42.533476536s" podCreationTimestamp="2025-11-21 14:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:00:02.56296643 +0000 UTC m=+48.727138791" watchObservedRunningTime="2025-11-21 15:00:03.533476536 +0000 UTC m=+49.697648889"
	Nov 21 15:00:03 embed-certs-902161 kubelet[1310]: I1121 15:00:03.533601    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gttll" podStartSLOduration=43.533595545 podStartE2EDuration="43.533595545s" podCreationTimestamp="2025-11-21 14:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:00:03.521965343 +0000 UTC m=+49.686137737" watchObservedRunningTime="2025-11-21 15:00:03.533595545 +0000 UTC m=+49.697767914"
	Nov 21 15:00:06 embed-certs-902161 kubelet[1310]: I1121 15:00:06.135807    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc54v\" (UniqueName: \"kubernetes.io/projected/9a929447-7041-4d25-a008-51ccf9c7f5e2-kube-api-access-hc54v\") pod \"busybox\" (UID: \"9a929447-7041-4d25-a008-51ccf9c7f5e2\") " pod="default/busybox"
	
	
	==> storage-provisioner [bc8bb37fe371c1ad105a1b37c4f5866ad6aa6fc4b57c4b03bb3c3d79a8ebf48d] <==
	I1121 15:00:02.254588       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 15:00:02.346705       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 15:00:02.346861       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 15:00:02.366047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:02.412567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:00:02.412747       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 15:00:02.428374       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-902161_7dd5c1b3-8ecb-44c4-b024-6c3fe939fcbe!
	I1121 15:00:02.428528       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ebbc0454-23a8-4831-a020-9201f95f5437", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-902161_7dd5c1b3-8ecb-44c4-b024-6c3fe939fcbe became leader
	W1121 15:00:02.477022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:02.544428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:00:02.639429       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-902161_7dd5c1b3-8ecb-44c4-b024-6c3fe939fcbe!
	W1121 15:00:04.548624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:04.554094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:06.558025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:06.562825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:08.566574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:08.571529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:10.574302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:10.579751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:12.583476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:12.591169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:14.594420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:14.599155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:16.603066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:00:16.608566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-902161 -n embed-certs-902161
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-902161 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-844780 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-844780 --alsologtostderr -v=1: exit status 80 (1.873878879s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-844780 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 15:01:24.949985  487751 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:01:24.950128  487751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:01:24.950140  487751 out.go:374] Setting ErrFile to fd 2...
	I1121 15:01:24.950159  487751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:01:24.950550  487751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:01:24.950932  487751 out.go:368] Setting JSON to false
	I1121 15:01:24.950962  487751 mustload.go:66] Loading cluster: no-preload-844780
	I1121 15:01:24.951770  487751 config.go:182] Loaded profile config "no-preload-844780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:01:24.952316  487751 cli_runner.go:164] Run: docker container inspect no-preload-844780 --format={{.State.Status}}
	I1121 15:01:24.969724  487751 host.go:66] Checking if "no-preload-844780" exists ...
	I1121 15:01:24.970062  487751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:01:25.043301  487751 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 15:01:25.033225462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:01:25.043992  487751 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-844780 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 15:01:25.047346  487751 out.go:179] * Pausing node no-preload-844780 ... 
	I1121 15:01:25.051953  487751 host.go:66] Checking if "no-preload-844780" exists ...
	I1121 15:01:25.052310  487751 ssh_runner.go:195] Run: systemctl --version
	I1121 15:01:25.052362  487751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-844780
	I1121 15:01:25.070318  487751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/no-preload-844780/id_rsa Username:docker}
	I1121 15:01:25.171886  487751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:01:25.194334  487751 pause.go:52] kubelet running: true
	I1121 15:01:25.194443  487751 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:01:25.410118  487751 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:01:25.410279  487751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:01:25.486096  487751 cri.go:89] found id: "8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad"
	I1121 15:01:25.486173  487751 cri.go:89] found id: "91214642a2e4d239b7aa08e3a3850f1413ec65232ed5fed18be3647fe444771c"
	I1121 15:01:25.486194  487751 cri.go:89] found id: "092aa66ca872f777a9f7bb1165f836461731b4ead738841303293cb5d0367e17"
	I1121 15:01:25.486212  487751 cri.go:89] found id: "25bca3969133f7a61c59e66d79a502421490cba2b74f6e2402d9413554a4e50c"
	I1121 15:01:25.486229  487751 cri.go:89] found id: "ae6f69c7f5749ac30e241e7543f9fb184fc24b729c4951ad03bbd80c5b5f834e"
	I1121 15:01:25.486260  487751 cri.go:89] found id: "f28d6ebe45c6536743a007a0f7945e8b16f3fabfc7bfef52a4f2c46fd0f649b8"
	I1121 15:01:25.486278  487751 cri.go:89] found id: "4de5050f30939399107806067ae7c00df56197c6f825d9cbdd2926418c8dfb1c"
	I1121 15:01:25.486295  487751 cri.go:89] found id: "8ef8bdf61c8fb8bd85b829ec545c408d5a4aedca375c5a8696c35c65e8c4bb35"
	I1121 15:01:25.486312  487751 cri.go:89] found id: "b93ce5f43f1f5d952f34aacec278e0b6e010e0f24f89fe665a79bb363f7b369e"
	I1121 15:01:25.486339  487751 cri.go:89] found id: "750148acce1d8a59f9fb4ba7b5e591406908b5ebfce4337f15a074d55153c1ee"
	I1121 15:01:25.486358  487751 cri.go:89] found id: "803b6f25723ab454823682f0924b288c077286398a7e98198fd3b7bd5f286fc6"
	I1121 15:01:25.486376  487751 cri.go:89] found id: ""
	I1121 15:01:25.486454  487751 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:01:25.506221  487751 retry.go:31] will retry after 176.393577ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:01:25Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:01:25.683658  487751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:01:25.697617  487751 pause.go:52] kubelet running: false
	I1121 15:01:25.697712  487751 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:01:25.883767  487751 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:01:25.883928  487751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:01:25.955515  487751 cri.go:89] found id: "8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad"
	I1121 15:01:25.955581  487751 cri.go:89] found id: "91214642a2e4d239b7aa08e3a3850f1413ec65232ed5fed18be3647fe444771c"
	I1121 15:01:25.955599  487751 cri.go:89] found id: "092aa66ca872f777a9f7bb1165f836461731b4ead738841303293cb5d0367e17"
	I1121 15:01:25.955617  487751 cri.go:89] found id: "25bca3969133f7a61c59e66d79a502421490cba2b74f6e2402d9413554a4e50c"
	I1121 15:01:25.955636  487751 cri.go:89] found id: "ae6f69c7f5749ac30e241e7543f9fb184fc24b729c4951ad03bbd80c5b5f834e"
	I1121 15:01:25.955663  487751 cri.go:89] found id: "f28d6ebe45c6536743a007a0f7945e8b16f3fabfc7bfef52a4f2c46fd0f649b8"
	I1121 15:01:25.955685  487751 cri.go:89] found id: "4de5050f30939399107806067ae7c00df56197c6f825d9cbdd2926418c8dfb1c"
	I1121 15:01:25.955702  487751 cri.go:89] found id: "8ef8bdf61c8fb8bd85b829ec545c408d5a4aedca375c5a8696c35c65e8c4bb35"
	I1121 15:01:25.955719  487751 cri.go:89] found id: "b93ce5f43f1f5d952f34aacec278e0b6e010e0f24f89fe665a79bb363f7b369e"
	I1121 15:01:25.955739  487751 cri.go:89] found id: "750148acce1d8a59f9fb4ba7b5e591406908b5ebfce4337f15a074d55153c1ee"
	I1121 15:01:25.955764  487751 cri.go:89] found id: "803b6f25723ab454823682f0924b288c077286398a7e98198fd3b7bd5f286fc6"
	I1121 15:01:25.955785  487751 cri.go:89] found id: ""
	I1121 15:01:25.955877  487751 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:01:25.967398  487751 retry.go:31] will retry after 511.416684ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:01:25Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:01:26.479077  487751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:01:26.492633  487751 pause.go:52] kubelet running: false
	I1121 15:01:26.492756  487751 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:01:26.659313  487751 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:01:26.659396  487751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:01:26.733089  487751 cri.go:89] found id: "8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad"
	I1121 15:01:26.733111  487751 cri.go:89] found id: "91214642a2e4d239b7aa08e3a3850f1413ec65232ed5fed18be3647fe444771c"
	I1121 15:01:26.733117  487751 cri.go:89] found id: "092aa66ca872f777a9f7bb1165f836461731b4ead738841303293cb5d0367e17"
	I1121 15:01:26.733120  487751 cri.go:89] found id: "25bca3969133f7a61c59e66d79a502421490cba2b74f6e2402d9413554a4e50c"
	I1121 15:01:26.733125  487751 cri.go:89] found id: "ae6f69c7f5749ac30e241e7543f9fb184fc24b729c4951ad03bbd80c5b5f834e"
	I1121 15:01:26.733128  487751 cri.go:89] found id: "f28d6ebe45c6536743a007a0f7945e8b16f3fabfc7bfef52a4f2c46fd0f649b8"
	I1121 15:01:26.733132  487751 cri.go:89] found id: "4de5050f30939399107806067ae7c00df56197c6f825d9cbdd2926418c8dfb1c"
	I1121 15:01:26.733136  487751 cri.go:89] found id: "8ef8bdf61c8fb8bd85b829ec545c408d5a4aedca375c5a8696c35c65e8c4bb35"
	I1121 15:01:26.733139  487751 cri.go:89] found id: "b93ce5f43f1f5d952f34aacec278e0b6e010e0f24f89fe665a79bb363f7b369e"
	I1121 15:01:26.733179  487751 cri.go:89] found id: "750148acce1d8a59f9fb4ba7b5e591406908b5ebfce4337f15a074d55153c1ee"
	I1121 15:01:26.733183  487751 cri.go:89] found id: "803b6f25723ab454823682f0924b288c077286398a7e98198fd3b7bd5f286fc6"
	I1121 15:01:26.733192  487751 cri.go:89] found id: ""
	I1121 15:01:26.733261  487751 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:01:26.749096  487751 out.go:203] 
	W1121 15:01:26.751999  487751 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:01:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:01:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 15:01:26.752022  487751 out.go:285] * 
	* 
	W1121 15:01:26.757682  487751 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 15:01:26.760861  487751 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-844780 --alsologtostderr -v=1 failed: exit status 80
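The failing step above is `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory": the kicbase container mounts /run as a tmpfs (see the docker inspect output below), so runc's default state directory only exists once the runtime has written to it, and under crio the state root may live elsewhere. minikube retries the listing with growing delays (the retry.go lines above) before surfacing GUEST_PAUSE. A minimal sketch of a more forgiving lookup, assuming shell access to the node; the candidate roots and retry schedule are illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// listRuncContainers tries `runc list` against a few candidate state roots,
	// since crio and plain runc may keep state under different directories.
	// The root paths beyond /run/runc are assumptions for illustration.
	func listRuncContainers() ([]byte, error) {
		roots := []string{"/run/runc", "/run/crio/runc", "/run/containers/runc"}
		var lastErr error
		for _, root := range roots {
			if _, err := os.Stat(root); err != nil {
				lastErr = err
				continue // state dir absent: nothing was ever started under this root
			}
			out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
			if err == nil {
				return out, nil
			}
			lastErr = err
		}
		return nil, lastErr
	}

	func main() {
		// Doubling backoff, loosely mirroring the retry.go delays seen above.
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 3; attempt++ {
			out, err := listRuncContainers()
			if err == nil {
				fmt.Printf("%s\n", out)
				return
			}
			fmt.Fprintf(os.Stderr, "attempt %d: %v, retrying in %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
	}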
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-844780
helpers_test.go:243: (dbg) docker inspect no-preload-844780:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460",
	        "Created": "2025-11-21T14:58:39.813840429Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 483334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T15:00:18.47976798Z",
	            "FinishedAt": "2025-11-21T15:00:17.329526483Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/hosts",
	        "LogPath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460-json.log",
	        "Name": "/no-preload-844780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-844780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-844780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460",
	                "LowerDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-844780",
	                "Source": "/var/lib/docker/volumes/no-preload-844780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-844780",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-844780",
	                "name.minikube.sigs.k8s.io": "no-preload-844780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "160be835c315be2dfb0fdf5e27dd061f73e9563d919efe89507daf8cb5996121",
	            "SandboxKey": "/var/run/docker/netns/160be835c315",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-844780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:17:d3:91:6d:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "beccd80047d00ade7f2a91d5b368d7f2498703ce72d6db7bd114ead62561b75b",
	                    "EndpointID": "b7cc625ffd7cca8046391d90e6b19fffcd7e38ea4cff15140ccfc01d3ff1b2e4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-844780",
	                        "8e592d0d77ca"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
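Two details in the inspect output matter for the failure above: "Tmpfs" shows /run mounted as tmpfs, so any runc state under /run did not survive the container restart at 15:00:18, and every published port is bound to 127.0.0.1 on a dynamically assigned host port (22/tcp -> 33438 here), which is why the pause command resolved its SSH endpoint with a `docker container inspect -f` Go template (cli_runner line in the stderr above). A minimal sketch of that same lookup, assuming the docker CLI is on PATH; the function name is ours, and the shell-style single quotes from the log are dropped because no shell is involved:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort returns the 127.0.0.1 host port Docker mapped to the given
	// container port, using the same template shape as the log above.
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		p, err := hostPort("no-preload-844780", "22")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + p) // e.g. 127.0.0.1:33438 in this run
	}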
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-844780 -n no-preload-844780
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-844780 -n no-preload-844780: exit status 2 (360.169459ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
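The "may be ok" annotation reflects that `status --format={{.Host}}` printed Running even though the kubelet had already been disabled: the Docker container is up, only the workload is half-paused. The --format flag is standard Go text/template syntax evaluated against a status struct; a minimal stand-alone sketch (the Status type and field values here are stand-ins, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the kind of fields a --format template can reference.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		// Same template language as `minikube status --format={{.Host}}` above.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Paused"})
	}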
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-844780 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-844780 logs -n 25: (1.382560891s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-605096                                                                                                                                                                                                                        │ cert-options-605096          │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-357479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │                     │
	│ stop    │ -p old-k8s-version-357479 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-357479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ image   │ old-k8s-version-357479 image list --format=json                                                                                                                                                                                               │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ pause   │ -p old-k8s-version-357479 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p cert-expiration-304879                                                                                                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 15:00 UTC │
	│ delete  │ -p disable-driver-mounts-984933                                                                                                                                                                                                               │ disable-driver-mounts-984933 │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ stop    │ -p no-preload-844780 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ addons  │ enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-844780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ stop    │ -p embed-certs-902161 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-902161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ image   │ no-preload-844780 image list --format=json                                                                                                                                                                                                    │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p no-preload-844780 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 15:00:30
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 15:00:30.593525  484973 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:00:30.593637  484973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:00:30.593666  484973 out.go:374] Setting ErrFile to fd 2...
	I1121 15:00:30.593672  484973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:00:30.593953  484973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:00:30.596642  484973 out.go:368] Setting JSON to false
	I1121 15:00:30.597614  484973 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9782,"bootTime":1763727448,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 15:00:30.597691  484973 start.go:143] virtualization:  
	I1121 15:00:30.600675  484973 out.go:179] * [embed-certs-902161] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 15:00:30.604492  484973 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 15:00:30.604589  484973 notify.go:221] Checking for updates...
	I1121 15:00:30.610105  484973 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 15:00:30.612902  484973 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:00:30.615730  484973 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 15:00:30.618618  484973 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 15:00:30.621567  484973 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 15:00:30.624780  484973 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:00:30.625367  484973 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 15:00:30.680640  484973 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 15:00:30.680822  484973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:00:30.796967  484973 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 15:00:30.786510982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:00:30.797070  484973 docker.go:319] overlay module found
	I1121 15:00:30.800284  484973 out.go:179] * Using the docker driver based on existing profile
	I1121 15:00:30.803085  484973 start.go:309] selected driver: docker
	I1121 15:00:30.803109  484973 start.go:930] validating driver "docker" against &{Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:00:30.803225  484973 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 15:00:30.803970  484973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:00:30.903176  484973 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 15:00:30.893036941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:00:30.903512  484973 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:00:30.903538  484973 cni.go:84] Creating CNI manager for ""
	I1121 15:00:30.903589  484973 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:00:30.903627  484973 start.go:353] cluster config:
	{Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:00:30.908908  484973 out.go:179] * Starting "embed-certs-902161" primary control-plane node in "embed-certs-902161" cluster
	I1121 15:00:30.911778  484973 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 15:00:30.914595  484973 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 15:00:30.917417  484973 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:00:30.917464  484973 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 15:00:30.917476  484973 cache.go:65] Caching tarball of preloaded images
	I1121 15:00:30.917562  484973 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 15:00:30.917571  484973 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 15:00:30.917688  484973 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/config.json ...
	I1121 15:00:30.917914  484973 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 15:00:30.945158  484973 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 15:00:30.945177  484973 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 15:00:30.945190  484973 cache.go:243] Successfully downloaded all kic artifacts
	I1121 15:00:30.945226  484973 start.go:360] acquireMachinesLock for embed-certs-902161: {Name:mk52b2685f312e9983127cfd2341df0728e188b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 15:00:30.945281  484973 start.go:364] duration metric: took 36.128µs to acquireMachinesLock for "embed-certs-902161"
	I1121 15:00:30.945299  484973 start.go:96] Skipping create...Using existing machine configuration
	I1121 15:00:30.945305  484973 fix.go:54] fixHost starting: 
	I1121 15:00:30.945555  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:30.974678  484973 fix.go:112] recreateIfNeeded on embed-certs-902161: state=Stopped err=<nil>
	W1121 15:00:30.974707  484973 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 15:00:32.252284  483158 node_ready.go:49] node "no-preload-844780" is "Ready"
	I1121 15:00:32.252318  483158 node_ready.go:38] duration metric: took 5.535797122s for node "no-preload-844780" to be "Ready" ...
	I1121 15:00:32.252337  483158 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:00:32.252426  483158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:00:34.031685  483158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.275164059s)
	I1121 15:00:34.031769  483158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.258242768s)
	I1121 15:00:34.114545  483158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.924423153s)
	I1121 15:00:34.114820  483158 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.862380689s)
	I1121 15:00:34.114863  483158 api_server.go:72] duration metric: took 7.774802246s to wait for apiserver process to appear ...
	I1121 15:00:34.114898  483158 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:00:34.114939  483158 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 15:00:34.117927  483158 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-844780 addons enable metrics-server
	
	I1121 15:00:34.120687  483158 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1121 15:00:30.977945  484973 out.go:252] * Restarting existing docker container for "embed-certs-902161" ...
	I1121 15:00:30.978046  484973 cli_runner.go:164] Run: docker start embed-certs-902161
	I1121 15:00:31.377148  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:31.404630  484973 kic.go:430] container "embed-certs-902161" state is running.
	I1121 15:00:31.405045  484973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 15:00:31.432707  484973 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/config.json ...
	I1121 15:00:31.432951  484973 machine.go:94] provisionDockerMachine start ...
	I1121 15:00:31.433013  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:31.460136  484973 main.go:143] libmachine: Using SSH client type: native
	I1121 15:00:31.460486  484973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1121 15:00:31.460496  484973 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 15:00:31.461501  484973 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 15:00:34.620031  484973 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-902161
	
	I1121 15:00:34.620056  484973 ubuntu.go:182] provisioning hostname "embed-certs-902161"
	I1121 15:00:34.620153  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:34.643909  484973 main.go:143] libmachine: Using SSH client type: native
	I1121 15:00:34.644224  484973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1121 15:00:34.644242  484973 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-902161 && echo "embed-certs-902161" | sudo tee /etc/hostname
	I1121 15:00:34.823624  484973 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-902161
	
	I1121 15:00:34.823722  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:34.843196  484973 main.go:143] libmachine: Using SSH client type: native
	I1121 15:00:34.843512  484973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1121 15:00:34.843539  484973 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-902161' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-902161/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-902161' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 15:00:34.990853  484973 main.go:143] libmachine: SSH cmd err, output: <nil>: 
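	Note: the script above pins the node hostname to 127.0.1.1 (not 127.0.0.1) so the entry can coexist with localhost. A minimal sketch for spot-checking the result by hand, assuming the container name from the log and docker exec access on the host:
	
		# hypothetical manual check; container name taken from the log above
		docker exec embed-certs-902161 grep '^127.0.1.1' /etc/hosts
		# expected (assumed) output: 127.0.1.1 embed-certs-902161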
	I1121 15:00:34.990879  484973 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 15:00:34.990922  484973 ubuntu.go:190] setting up certificates
	I1121 15:00:34.990938  484973 provision.go:84] configureAuth start
	I1121 15:00:34.991013  484973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 15:00:35.028966  484973 provision.go:143] copyHostCerts
	I1121 15:00:35.029046  484973 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 15:00:35.029071  484973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 15:00:35.029159  484973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 15:00:35.029288  484973 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 15:00:35.029302  484973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 15:00:35.029335  484973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 15:00:35.029413  484973 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 15:00:35.029423  484973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 15:00:35.029451  484973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 15:00:35.029520  484973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.embed-certs-902161 san=[127.0.0.1 192.168.76.2 embed-certs-902161 localhost minikube]
	I1121 15:00:35.316146  484973 provision.go:177] copyRemoteCerts
	I1121 15:00:35.316223  484973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 15:00:35.316266  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:35.337248  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:35.444866  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 15:00:35.468245  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1121 15:00:35.487779  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 15:00:35.510427  484973 provision.go:87] duration metric: took 519.468134ms to configureAuth
	I1121 15:00:35.510455  484973 ubuntu.go:206] setting minikube options for container-runtime
	I1121 15:00:35.510685  484973 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:00:35.510815  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:35.543699  484973 main.go:143] libmachine: Using SSH client type: native
	I1121 15:00:35.544004  484973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1121 15:00:35.544018  484973 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 15:00:35.955786  484973 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 15:00:35.955861  484973 machine.go:97] duration metric: took 4.522897923s to provisionDockerMachine
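	Note: the SSH command above writes a CRI-O environment drop-in and restarts the service; the echoed CRIO_MINIKUBE_OPTIONS line confirms the file content round-tripped. A minimal sketch for verifying it by hand, assuming docker exec access to the node:
	
		# hypothetical manual check of the drop-in written above
		docker exec embed-certs-902161 cat /etc/sysconfig/crio.minikube
		docker exec embed-certs-902161 systemctl is-active crio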
	I1121 15:00:35.955887  484973 start.go:293] postStartSetup for "embed-certs-902161" (driver="docker")
	I1121 15:00:35.955912  484973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 15:00:35.956011  484973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 15:00:35.956067  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:35.980891  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:36.089496  484973 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 15:00:36.092998  484973 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 15:00:36.093032  484973 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 15:00:36.093045  484973 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 15:00:36.093107  484973 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 15:00:36.093219  484973 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 15:00:36.093326  484973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 15:00:36.105328  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:00:36.131615  484973 start.go:296] duration metric: took 175.692953ms for postStartSetup
	I1121 15:00:36.131710  484973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 15:00:36.131764  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:36.159512  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:36.265874  484973 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 15:00:36.272999  484973 fix.go:56] duration metric: took 5.327685783s for fixHost
	I1121 15:00:36.273023  484973 start.go:83] releasing machines lock for "embed-certs-902161", held for 5.327734325s
	I1121 15:00:36.273098  484973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 15:00:36.296648  484973 ssh_runner.go:195] Run: cat /version.json
	I1121 15:00:36.296698  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:36.296713  484973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 15:00:36.296766  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:36.342540  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:36.354692  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:36.444262  484973 ssh_runner.go:195] Run: systemctl --version
	I1121 15:00:36.542925  484973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 15:00:36.595084  484973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 15:00:36.600518  484973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 15:00:36.600633  484973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 15:00:36.609885  484973 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 15:00:36.609954  484973 start.go:496] detecting cgroup driver to use...
	I1121 15:00:36.609999  484973 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 15:00:36.610073  484973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 15:00:36.635625  484973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 15:00:36.652984  484973 docker.go:218] disabling cri-docker service (if available) ...
	I1121 15:00:36.653108  484973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 15:00:36.674892  484973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 15:00:36.694370  484973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 15:00:36.847552  484973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 15:00:36.978274  484973 docker.go:234] disabling docker service ...
	I1121 15:00:36.978360  484973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 15:00:36.996673  484973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 15:00:37.013748  484973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 15:00:37.178664  484973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 15:00:37.366392  484973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 15:00:37.381058  484973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 15:00:37.398935  484973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 15:00:37.399008  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.409728  484973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 15:00:37.409849  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.419482  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.428858  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.437873  484973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 15:00:37.446537  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.455910  484973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.464759  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.474131  484973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 15:00:37.483070  484973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 15:00:37.492442  484973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:00:37.666882  484973 ssh_runner.go:195] Run: sudo systemctl restart crio
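	Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A minimal sketch to spot-check the resulting values, assuming docker exec access; the expected lines mirror the substitutions in the log:
	
		# hypothetical spot-check of the CRI-O drop-in after the restart
		docker exec embed-certs-902161 grep -E 'pause_image|cgroup_manager|conmon_cgroup' \
			/etc/crio/crio.conf.d/02-crio.conf
		# expected (assumed): pause_image = "registry.k8s.io/pause:3.10.1"
		#                     cgroup_manager = "cgroupfs"
		#                     conmon_cgroup = "pod"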
	I1121 15:00:37.882061  484973 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 15:00:37.882143  484973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 15:00:37.890867  484973 start.go:564] Will wait 60s for crictl version
	I1121 15:00:37.890959  484973 ssh_runner.go:195] Run: which crictl
	I1121 15:00:37.896736  484973 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 15:00:37.959510  484973 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 15:00:37.959631  484973 ssh_runner.go:195] Run: crio --version
	I1121 15:00:38.000360  484973 ssh_runner.go:195] Run: crio --version
	I1121 15:00:38.055503  484973 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 15:00:34.123500  483158 addons.go:530] duration metric: took 7.783041646s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1121 15:00:34.123983  483158 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 15:00:34.125067  483158 api_server.go:141] control plane version: v1.34.1
	I1121 15:00:34.125096  483158 api_server.go:131] duration metric: took 10.179303ms to wait for apiserver health ...
	I1121 15:00:34.125106  483158 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:00:34.128558  483158 system_pods.go:59] 8 kube-system pods found
	I1121 15:00:34.128608  483158 system_pods.go:61] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:34.128618  483158 system_pods.go:61] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:00:34.128624  483158 system_pods.go:61] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 15:00:34.128634  483158 system_pods.go:61] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:00:34.128641  483158 system_pods.go:61] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:00:34.128648  483158 system_pods.go:61] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1121 15:00:34.128659  483158 system_pods.go:61] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:00:34.128667  483158 system_pods.go:61] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:00:34.128675  483158 system_pods.go:74] duration metric: took 3.563858ms to wait for pod list to return data ...
	I1121 15:00:34.128685  483158 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:00:34.131699  483158 default_sa.go:45] found service account: "default"
	I1121 15:00:34.131728  483158 default_sa.go:55] duration metric: took 3.031757ms for default service account to be created ...
	I1121 15:00:34.131737  483158 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:00:34.139170  483158 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:34.139215  483158 system_pods.go:89] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:34.139226  483158 system_pods.go:89] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:00:34.139234  483158 system_pods.go:89] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 15:00:34.139241  483158 system_pods.go:89] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:00:34.139251  483158 system_pods.go:89] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:00:34.139262  483158 system_pods.go:89] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1121 15:00:34.139268  483158 system_pods.go:89] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:00:34.139285  483158 system_pods.go:89] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:00:34.139292  483158 system_pods.go:126] duration metric: took 7.550136ms to wait for k8s-apps to be running ...
	I1121 15:00:34.139301  483158 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:00:34.139382  483158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:00:34.155237  483158 system_svc.go:56] duration metric: took 15.911277ms WaitForService to wait for kubelet
	I1121 15:00:34.155267  483158 kubeadm.go:587] duration metric: took 7.815205635s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:00:34.155289  483158 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:00:34.158840  483158 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:00:34.158877  483158 node_conditions.go:123] node cpu capacity is 2
	I1121 15:00:34.158905  483158 node_conditions.go:105] duration metric: took 3.588063ms to run NodePressure ...
	I1121 15:00:34.158919  483158 start.go:242] waiting for startup goroutines ...
	I1121 15:00:34.158930  483158 start.go:247] waiting for cluster config update ...
	I1121 15:00:34.158948  483158 start.go:256] writing updated cluster config ...
	I1121 15:00:34.159299  483158 ssh_runner.go:195] Run: rm -f paused
	I1121 15:00:34.164822  483158 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:00:34.176094  483158 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2mqjs" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 15:00:36.203845  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	I1121 15:00:38.058543  484973 cli_runner.go:164] Run: docker network inspect embed-certs-902161 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 15:00:38.077698  484973 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 15:00:38.082151  484973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:00:38.097170  484973 kubeadm.go:884] updating cluster {Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 15:00:38.097296  484973 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:00:38.097350  484973 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:00:38.153556  484973 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:00:38.153576  484973 crio.go:433] Images already preloaded, skipping extraction
	I1121 15:00:38.153636  484973 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:00:38.198676  484973 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:00:38.198760  484973 cache_images.go:86] Images are preloaded, skipping loading
	I1121 15:00:38.198782  484973 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1121 15:00:38.198936  484973 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-902161 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 15:00:38.199064  484973 ssh_runner.go:195] Run: crio config
	I1121 15:00:38.309235  484973 cni.go:84] Creating CNI manager for ""
	I1121 15:00:38.309315  484973 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:00:38.309347  484973 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 15:00:38.309485  484973 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-902161 NodeName:embed-certs-902161 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 15:00:38.309669  484973 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-902161"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 15:00:38.309789  484973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 15:00:38.320672  484973 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 15:00:38.320798  484973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 15:00:38.331044  484973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1121 15:00:38.353096  484973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 15:00:38.370306  484973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
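	Note: the kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new before being diffed against the live copy further down. A minimal sketch for validating it independently, assuming kubeadm sits alongside the cached binaries the log found and that the "kubeadm config validate" subcommand is available in v1.34.1:
	
		# hypothetical validation of the staged manifest; paths from the log,
		# availability of the validate subcommand is an assumption
		docker exec embed-certs-902161 sudo \
			/var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
			--config /var/tmp/minikube/kubeadm.yaml.new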
	I1121 15:00:38.391480  484973 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 15:00:38.395512  484973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:00:38.410553  484973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:00:38.587367  484973 ssh_runner.go:195] Run: sudo systemctl start kubelet
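	Note: the two scp'd unit files above land at /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, and the daemon-reload picks both up before kubelet starts. A minimal sketch to confirm the merged unit systemd actually loaded, assuming docker exec access:
	
		# hypothetical check that systemd sees both the unit and its drop-in
		docker exec embed-certs-902161 systemctl cat kubelet.service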
	I1121 15:00:38.608500  484973 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161 for IP: 192.168.76.2
	I1121 15:00:38.608576  484973 certs.go:195] generating shared ca certs ...
	I1121 15:00:38.608606  484973 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:00:38.608869  484973 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 15:00:38.608981  484973 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 15:00:38.609016  484973 certs.go:257] generating profile certs ...
	I1121 15:00:38.609147  484973 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.key
	I1121 15:00:38.609255  484973 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key.5d5840b9
	I1121 15:00:38.609336  484973 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key
	I1121 15:00:38.609485  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 15:00:38.609548  484973 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 15:00:38.609570  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 15:00:38.609628  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 15:00:38.609695  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 15:00:38.609738  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 15:00:38.609816  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:00:38.610719  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 15:00:38.664437  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 15:00:38.714334  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 15:00:38.756929  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 15:00:38.808013  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1121 15:00:38.859334  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 15:00:38.933677  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 15:00:38.966765  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 15:00:38.991635  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 15:00:39.023107  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 15:00:39.075863  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 15:00:39.100797  484973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 15:00:39.116371  484973 ssh_runner.go:195] Run: openssl version
	I1121 15:00:39.123631  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 15:00:39.134301  484973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:00:39.138769  484973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:00:39.138907  484973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:00:39.198381  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 15:00:39.207670  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 15:00:39.217299  484973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 15:00:39.222003  484973 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 15:00:39.222122  484973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 15:00:39.267076  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 15:00:39.277390  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 15:00:39.288058  484973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 15:00:39.293015  484973 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 15:00:39.293172  484973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 15:00:39.338201  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
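	Note: the three symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash values with a .0 suffix, which is how OpenSSL looks up CA files in /etc/ssl/certs. A minimal sketch reproducing one hash, assuming docker exec access; the expected value is taken from the symlink the log creates:
	
		# hypothetical check: the hash should match the b5213941.0 link above
		docker exec embed-certs-902161 openssl x509 -hash -noout \
			-in /usr/share/ca-certificates/minikubeCA.pem
		# expected (assumed) output: b5213941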
	I1121 15:00:39.349541  484973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 15:00:39.354231  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 15:00:39.398745  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 15:00:39.441522  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 15:00:39.539555  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 15:00:39.644511  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 15:00:39.747578  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
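	Note: each openssl run above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 h); a failure here is what would trigger certificate regeneration. A minimal sketch of the same check against one of the certs copied earlier, assuming docker exec access:
	
		# hypothetical 24h-expiry check, mirroring the log's -checkend usage
		docker exec embed-certs-902161 openssl x509 -noout -checkend 86400 \
			-in /var/lib/minikube/certs/apiserver.crt \
			&& echo "valid for at least 24h" || echo "expires within 24h"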
	I1121 15:00:39.863816  484973 kubeadm.go:401] StartCluster: {Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:00:39.863982  484973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 15:00:39.864098  484973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 15:00:39.975142  484973 cri.go:89] found id: "293c832724412d175c6e8ec646f8f5a753d6137d6354da90fcdc7748544c0176"
	I1121 15:00:39.975217  484973 cri.go:89] found id: "7f311233a0597f06fd619eca3d2076efd29a59099af3f91b2e7ad174953bec43"
	I1121 15:00:39.975236  484973 cri.go:89] found id: "f8022ed115d2dc50106d1d8099fe151f9220a20d78fad121bb27fe4d5d278763"
	I1121 15:00:39.975256  484973 cri.go:89] found id: "0040362a6ed65610771d229f1c844dc6fd8551a599ac1712dfac5b502944fa4e"
	I1121 15:00:39.975287  484973 cri.go:89] found id: ""
	I1121 15:00:39.975379  484973 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 15:00:39.998445  484973 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:00:39Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:00:39.998617  484973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 15:00:40.014656  484973 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 15:00:40.014755  484973 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 15:00:40.014852  484973 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 15:00:40.027985  484973 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 15:00:40.028835  484973 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-902161" does not appear in /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:00:40.029223  484973 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-289204/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-902161" cluster setting kubeconfig missing "embed-certs-902161" context setting]
	I1121 15:00:40.030214  484973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:00:40.032248  484973 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 15:00:40.045665  484973 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1121 15:00:40.045763  484973 kubeadm.go:602] duration metric: took 30.984595ms to restartPrimaryControlPlane
	I1121 15:00:40.045789  484973 kubeadm.go:403] duration metric: took 181.984825ms to StartCluster
	I1121 15:00:40.045846  484973 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:00:40.045962  484973 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:00:40.047528  484973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:00:40.048053  484973 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:00:40.048586  484973 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 15:00:40.048683  484973 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-902161"
	I1121 15:00:40.048700  484973 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-902161"
	W1121 15:00:40.048707  484973 addons.go:248] addon storage-provisioner should already be in state true
	I1121 15:00:40.048733  484973 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 15:00:40.049600  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:40.050028  484973 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:00:40.050164  484973 addons.go:70] Setting dashboard=true in profile "embed-certs-902161"
	I1121 15:00:40.050200  484973 addons.go:239] Setting addon dashboard=true in "embed-certs-902161"
	W1121 15:00:40.050238  484973 addons.go:248] addon dashboard should already be in state true
	I1121 15:00:40.050285  484973 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 15:00:40.050832  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:40.059456  484973 addons.go:70] Setting default-storageclass=true in profile "embed-certs-902161"
	I1121 15:00:40.059749  484973 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-902161"
	I1121 15:00:40.059616  484973 out.go:179] * Verifying Kubernetes components...
	I1121 15:00:40.064163  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:40.067623  484973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:00:40.110623  484973 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 15:00:40.115435  484973 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 15:00:40.122830  484973 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:00:40.122856  484973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 15:00:40.122937  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:40.123100  484973 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1121 15:00:40.126581  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 15:00:40.126610  484973 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 15:00:40.126699  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:40.147762  484973 addons.go:239] Setting addon default-storageclass=true in "embed-certs-902161"
	W1121 15:00:40.147794  484973 addons.go:248] addon default-storageclass should already be in state true
	I1121 15:00:40.147821  484973 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 15:00:40.148289  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:40.170738  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:40.198761  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:40.218417  484973 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 15:00:40.218441  484973 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 15:00:40.218515  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:40.245286  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:40.528964  484973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1121 15:00:38.682847  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:40.685751  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:42.687604  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	I1121 15:00:40.594855  484973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 15:00:40.611173  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 15:00:40.611200  484973 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 15:00:40.645401  484973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:00:40.791048  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 15:00:40.791074  484973 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 15:00:40.861670  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 15:00:40.861697  484973 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 15:00:40.965180  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 15:00:40.965204  484973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 15:00:41.021800  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 15:00:41.021829  484973 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 15:00:41.043585  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 15:00:41.043609  484973 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 15:00:41.081321  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 15:00:41.081347  484973 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 15:00:41.125316  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 15:00:41.125342  484973 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 15:00:41.151603  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 15:00:41.151631  484973 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 15:00:41.197651  484973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1121 15:00:45.295303  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:47.681044  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	I1121 15:00:50.006966  484973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.412073858s)
	I1121 15:00:50.007022  484973 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.361583532s)
	I1121 15:00:50.007188  484973 node_ready.go:35] waiting up to 6m0s for node "embed-certs-902161" to be "Ready" ...
	I1121 15:00:50.007549  484973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.478546805s)
	I1121 15:00:50.063340  484973 node_ready.go:49] node "embed-certs-902161" is "Ready"
	I1121 15:00:50.063424  484973 node_ready.go:38] duration metric: took 56.206848ms for node "embed-certs-902161" to be "Ready" ...
	I1121 15:00:50.063543  484973 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:00:50.063641  484973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:00:50.189829  484973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.992132215s)
	I1121 15:00:50.190058  484973 api_server.go:72] duration metric: took 10.141918792s to wait for apiserver process to appear ...
	I1121 15:00:50.190113  484973 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:00:50.190163  484973 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:00:50.192923  484973 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-902161 addons enable metrics-server
	
	I1121 15:00:50.196063  484973 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1121 15:00:50.199077  484973 addons.go:530] duration metric: took 10.150477121s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1121 15:00:50.220353  484973 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 15:00:50.221705  484973 api_server.go:141] control plane version: v1.34.1
	I1121 15:00:50.221729  484973 api_server.go:131] duration metric: took 31.592684ms to wait for apiserver health ...
	I1121 15:00:50.221739  484973 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:00:50.235520  484973 system_pods.go:59] 8 kube-system pods found
	I1121 15:00:50.235616  484973 system_pods.go:61] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:50.235642  484973 system_pods.go:61] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:00:50.235675  484973 system_pods.go:61] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:50.235703  484973 system_pods.go:61] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:00:50.235727  484973 system_pods.go:61] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:00:50.235760  484973 system_pods.go:61] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:50.235789  484973 system_pods.go:61] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:00:50.235806  484973 system_pods.go:61] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Running
	I1121 15:00:50.235842  484973 system_pods.go:74] duration metric: took 14.0801ms to wait for pod list to return data ...
	I1121 15:00:50.235867  484973 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:00:50.245066  484973 default_sa.go:45] found service account: "default"
	I1121 15:00:50.245143  484973 default_sa.go:55] duration metric: took 9.2567ms for default service account to be created ...
	I1121 15:00:50.245167  484973 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:00:50.258793  484973 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:50.258882  484973 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:50.258906  484973 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:00:50.258944  484973 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:50.258969  484973 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:00:50.258990  484973 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:00:50.259009  484973 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:50.259041  484973 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:00:50.259065  484973 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Running
	I1121 15:00:50.259085  484973 system_pods.go:126] duration metric: took 13.900587ms to wait for k8s-apps to be running ...
	I1121 15:00:50.259118  484973 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:00:50.259208  484973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:00:50.346755  484973 system_svc.go:56] duration metric: took 87.628568ms WaitForService to wait for kubelet
	I1121 15:00:50.346837  484973 kubeadm.go:587] duration metric: took 10.298698255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:00:50.346871  484973 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:00:50.351286  484973 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:00:50.351367  484973 node_conditions.go:123] node cpu capacity is 2
	I1121 15:00:50.351396  484973 node_conditions.go:105] duration metric: took 4.506957ms to run NodePressure ...
	I1121 15:00:50.351422  484973 start.go:242] waiting for startup goroutines ...
	I1121 15:00:50.351454  484973 start.go:247] waiting for cluster config update ...
	I1121 15:00:50.351483  484973 start.go:256] writing updated cluster config ...
	I1121 15:00:50.351825  484973 ssh_runner.go:195] Run: rm -f paused
	I1121 15:00:50.356663  484973 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:00:50.363303  484973 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gttll" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 15:00:49.682183  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:52.182446  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:52.368672  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:00:54.869535  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:00:54.682057  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:57.181621  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:57.372469  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:00:59.870659  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:00:59.683571  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:01:01.695669  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:01:01.881980  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:04.369814  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:04.181512  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:01:06.182186  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:01:06.869541  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:08.869871  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:08.681773  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:01:10.682804  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	I1121 15:01:11.681627  483158 pod_ready.go:94] pod "coredns-66bc5c9577-2mqjs" is "Ready"
	I1121 15:01:11.681700  483158 pod_ready.go:86] duration metric: took 37.505576347s for pod "coredns-66bc5c9577-2mqjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.685448  483158 pod_ready.go:83] waiting for pod "etcd-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.690530  483158 pod_ready.go:94] pod "etcd-no-preload-844780" is "Ready"
	I1121 15:01:11.690560  483158 pod_ready.go:86] duration metric: took 5.081562ms for pod "etcd-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.693254  483158 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.698788  483158 pod_ready.go:94] pod "kube-apiserver-no-preload-844780" is "Ready"
	I1121 15:01:11.698816  483158 pod_ready.go:86] duration metric: took 5.538313ms for pod "kube-apiserver-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.701176  483158 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.879777  483158 pod_ready.go:94] pod "kube-controller-manager-no-preload-844780" is "Ready"
	I1121 15:01:11.879808  483158 pod_ready.go:86] duration metric: took 178.607383ms for pod "kube-controller-manager-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:12.079865  483158 pod_ready.go:83] waiting for pod "kube-proxy-2zwvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:12.479525  483158 pod_ready.go:94] pod "kube-proxy-2zwvg" is "Ready"
	I1121 15:01:12.479582  483158 pod_ready.go:86] duration metric: took 399.68353ms for pod "kube-proxy-2zwvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:12.680190  483158 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:13.079558  483158 pod_ready.go:94] pod "kube-scheduler-no-preload-844780" is "Ready"
	I1121 15:01:13.079586  483158 pod_ready.go:86] duration metric: took 399.362467ms for pod "kube-scheduler-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:13.079639  483158 pod_ready.go:40] duration metric: took 38.914749518s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:01:13.147429  483158 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:01:13.150317  483158 out.go:179] * Done! kubectl is now configured to use "no-preload-844780" cluster and "default" namespace by default
	W1121 15:01:11.370147  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:13.868693  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:15.869074  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:18.369824  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:20.874083  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:23.369678  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:25.373699  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
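	
	The readiness loop above can be approximated by hand while the run is live; a minimal sketch, assuming the kubeconfig context carries the profile name (minikube's default):
	
		kubectl --context embed-certs-902161 -n kube-system wait pod \
		  -l k8s-app=kube-dns --for=condition=Ready --timeout=240s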
	
	
	==> CRI-O <==
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.885929466Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c9d7c128-3ef1-4f83-b087-7f45d928b862 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.886921445Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=22b9e420-5f0d-4290-a0df-56ae2bf9b420 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.887068991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.895890675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.896099046Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/54d1cac443932b1b9428831edcdee248bc1945dc635b93051921db5415d0bea8/merged/etc/passwd: no such file or directory"
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.896129504Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/54d1cac443932b1b9428831edcdee248bc1945dc635b93051921db5415d0bea8/merged/etc/group: no such file or directory"
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.896415834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.917427986Z" level=info msg="Created container 8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad: kube-system/storage-provisioner/storage-provisioner" id=22b9e420-5f0d-4290-a0df-56ae2bf9b420 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.918546801Z" level=info msg="Starting container: 8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad" id=595d57c7-2739-4c4c-899b-9e37d478e849 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.920737342Z" level=info msg="Started container" PID=1631 containerID=8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad description=kube-system/storage-provisioner/storage-provisioner id=595d57c7-2739-4c4c-899b-9e37d478e849 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e3e7b601f74d3c025a16faa58b55735d69b321893d1876f06381a31ecf7a705
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.804596294Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.808793266Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.808844943Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.80886946Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.813181075Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.813215874Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.813239251Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.816419185Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.816452884Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.816473496Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.819788964Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.819823484Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.819856469Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.823911245Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.823948225Z" level=info msg="Updated default CNI network name to kindnet"
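	
	The CNI config that CRI-O settles on after the CREATE/WRITE/RENAME events can be read straight off the node; a sketch using minikube's ssh passthrough:
	
		minikube -p no-preload-844780 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist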
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8e46ba8bffec9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago       Running             storage-provisioner         2                   3e3e7b601f74d       storage-provisioner                          kube-system
	750148acce1d8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   331b6a592418b       dashboard-metrics-scraper-6ffb444bf9-kg884   kubernetes-dashboard
	803b6f25723ab       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago       Running             kubernetes-dashboard        0                   788ce08e5213d       kubernetes-dashboard-855c9754f9-6gjq5        kubernetes-dashboard
	91214642a2e4d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   8370aa5d83668       coredns-66bc5c9577-2mqjs                     kube-system
	b8a665b3a7370       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   2a984946bb90c       busybox                                      default
	092aa66ca872f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago       Exited              storage-provisioner         1                   3e3e7b601f74d       storage-provisioner                          kube-system
	25bca3969133f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   e892fb499a45c       kube-proxy-2zwvg                             kube-system
	ae6f69c7f5749       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   fc5b4056b8ad3       kindnet-whwj8                                kube-system
	f28d6ebe45c65       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   c719f7bb9d1d7       etcd-no-preload-844780                       kube-system
	4de5050f30939       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   bb25d0f582fee       kube-apiserver-no-preload-844780             kube-system
	8ef8bdf61c8fb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   080de2f635bd2       kube-controller-manager-no-preload-844780    kube-system
	b93ce5f43f1f5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   6fec832e5e215       kube-scheduler-no-preload-844780             kube-system
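	
	The table above is CRI-O's container listing, including exited attempts; on the node it corresponds to (a sketch):
	
		minikube -p no-preload-844780 ssh -- sudo crictl ps -a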
	
	
	==> coredns [91214642a2e4d239b7aa08e3a3850f1413ec65232ed5fed18be3647fe444771c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45889 - 23168 "HINFO IN 6519282933529723802.8562960686694559870. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032625788s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
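	
	The dial tcp 10.96.0.1:443: i/o timeout lines show CoreDNS unable to reach the kubernetes Service VIP while the apiserver was restarting; the list retries stop once connectivity returns. A quick in-cluster probe after recovery, as a sketch (the busybox tag is an assumption):
	
		kubectl -n kube-system run dns-probe --rm -i --restart=Never \
		  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local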
	
	
	==> describe nodes <==
	Name:               no-preload-844780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-844780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=no-preload-844780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_59_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:59:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-844780
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:01:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:01:13 +0000   Fri, 21 Nov 2025 14:59:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:01:13 +0000   Fri, 21 Nov 2025 14:59:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:01:13 +0000   Fri, 21 Nov 2025 14:59:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 15:01:13 +0000   Fri, 21 Nov 2025 14:59:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-844780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                0ed5c352-e309-429b-9135-9dfa2d81a7b2
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-2mqjs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-no-preload-844780                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-whwj8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-844780              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-844780     200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-2zwvg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-844780              100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kg884    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6gjq5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 111s               kube-proxy       
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 2m6s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 118s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  118s               kubelet          Node no-preload-844780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    118s               kubelet          Node no-preload-844780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s               kubelet          Node no-preload-844780 status is now: NodeHasSufficientPID
	  Normal   Starting                 118s               kubelet          Starting kubelet.
	  Normal   RegisteredNode           114s               node-controller  Node no-preload-844780 event: Registered Node no-preload-844780 in Controller
	  Normal   NodeReady                99s                kubelet          Node no-preload-844780 status is now: NodeReady
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node no-preload-844780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node no-preload-844780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node no-preload-844780 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                node-controller  Node no-preload-844780 event: Registered Node no-preload-844780 in Controller
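	
	The node description above is the output of (a sketch; the context name is assumed to match the profile):
	
		kubectl --context no-preload-844780 describe node no-preload-844780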
	
	
	==> dmesg <==
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	[Nov21 15:00] overlayfs: idmapped layers are currently not supported
	[ +13.392744] overlayfs: idmapped layers are currently not supported
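	
	The repeated overlayfs warnings are host-kernel noise rather than a failure of this run: the 5.15 kernel here predates overlayfs idmapped-layer support (added in 5.19), so each container start that requests it logs one. To confirm on the node (a sketch):
	
		minikube -p no-preload-844780 ssh -- sudo dmesg | grep -i overlayfs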
	
	
	==> etcd [f28d6ebe45c6536743a007a0f7945e8b16f3fabfc7bfef52a4f2c46fd0f649b8] <==
	{"level":"warn","ts":"2025-11-21T15:00:29.252700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.261529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.291405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.322411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.353359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.392067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.428360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.447601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.496709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.563139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.598924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.623104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.663011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.677780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.728560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.759618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.783400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.864834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.906868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.971989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.997283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:30.032727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:30.080365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:30.118990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:30.256911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51068","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:01:28 up  2:43,  0 user,  load average: 3.61, 3.31, 2.71
	Linux no-preload-844780 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ae6f69c7f5749ac30e241e7543f9fb184fc24b729c4951ad03bbd80c5b5f834e] <==
	I1121 15:00:33.548738       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 15:00:33.604975       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 15:00:33.605160       1 main.go:148] setting mtu 1500 for CNI 
	I1121 15:00:33.605175       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 15:00:33.605187       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T15:00:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 15:00:33.805442       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 15:00:33.812979       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 15:00:33.813003       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 15:00:33.813540       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 15:01:03.804245       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 15:01:03.806416       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 15:01:03.813981       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 15:01:03.813981       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1121 15:01:05.213607       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 15:01:05.213639       1 metrics.go:72] Registering metrics
	I1121 15:01:05.213714       1 controller.go:711] "Syncing nftables rules"
	I1121 15:01:13.804290       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:01:13.804340       1 main.go:301] handling current node
	I1121 15:01:23.812205       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:01:23.812241       1 main.go:301] handling current node
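	
	kindnet hits the same 10.96.0.1:443 timeouts as CoreDNS during the apiserver restart, then recovers once its informer caches sync and resumes handling the node. A post-recovery check, as a sketch (the app=kindnet label is an assumption):
	
		kubectl -n kube-system get pods -l app=kindnet -o wide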
	
	
	==> kube-apiserver [4de5050f30939399107806067ae7c00df56197c6f825d9cbdd2926418c8dfb1c] <==
	I1121 15:00:32.306020       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 15:00:32.306160       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 15:00:32.316917       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1121 15:00:32.316993       1 aggregator.go:171] initial CRD sync complete...
	I1121 15:00:32.317003       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 15:00:32.317022       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 15:00:32.317029       1 cache.go:39] Caches are synced for autoregister controller
	I1121 15:00:32.317284       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 15:00:32.317298       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 15:00:32.317433       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 15:00:32.317485       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 15:00:32.327516       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1121 15:00:32.334366       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 15:00:32.351206       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 15:00:32.644785       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 15:00:32.740986       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 15:00:33.791728       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 15:00:33.931588       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 15:00:33.986773       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 15:00:34.006032       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 15:00:34.090864       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.232.191"}
	I1121 15:00:34.107434       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.233.26"}
	I1121 15:00:35.810726       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 15:00:36.060749       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 15:00:36.112057       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8ef8bdf61c8fb8bd85b829ec545c408d5a4aedca375c5a8696c35c65e8c4bb35] <==
	I1121 15:00:35.657121       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 15:00:35.657157       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 15:00:35.657383       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 15:00:35.657427       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 15:00:35.657532       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 15:00:35.657542       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 15:00:35.660007       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 15:00:35.660081       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:00:35.661329       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 15:00:35.662153       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 15:00:35.668084       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 15:00:35.668235       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 15:00:35.668905       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 15:00:35.670763       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 15:00:35.673419       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-844780"
	I1121 15:00:35.673997       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 15:00:35.673519       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 15:00:35.671609       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 15:00:35.677373       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 15:00:35.690715       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:00:35.692691       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:00:35.692718       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 15:00:35.692726       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 15:00:35.703722       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 15:00:35.714102       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [25bca3969133f7a61c59e66d79a502421490cba2b74f6e2402d9413554a4e50c] <==
	I1121 15:00:34.022353       1 server_linux.go:53] "Using iptables proxy"
	I1121 15:00:34.222293       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 15:00:34.324636       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 15:00:34.324677       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 15:00:34.324762       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 15:00:34.379086       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 15:00:34.379155       1 server_linux.go:132] "Using iptables Proxier"
	I1121 15:00:34.383872       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 15:00:34.384237       1 server.go:527] "Version info" version="v1.34.1"
	I1121 15:00:34.384264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:00:34.385723       1 config.go:200] "Starting service config controller"
	I1121 15:00:34.385748       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 15:00:34.385767       1 config.go:106] "Starting endpoint slice config controller"
	I1121 15:00:34.385771       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 15:00:34.385781       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 15:00:34.385786       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 15:00:34.386903       1 config.go:309] "Starting node config controller"
	I1121 15:00:34.386923       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 15:00:34.386931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 15:00:34.486454       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 15:00:34.486494       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 15:00:34.486534       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
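	
	The nodePortAddresses warning is advisory: with the field unset, NodePorts accept connections on every local IP, and the log suggests --nodeport-addresses primary to narrow that. What the running instance was actually given can be read back from its config; a sketch, assuming the kubeadm-style kube-system/kube-proxy ConfigMap:
	
		kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses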
	
	
	==> kube-scheduler [b93ce5f43f1f5d952f34aacec278e0b6e010e0f24f89fe665a79bb363f7b369e] <==
	I1121 15:00:30.256250       1 serving.go:386] Generated self-signed cert in-memory
	I1121 15:00:34.226497       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 15:00:34.226537       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:00:34.241094       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 15:00:34.241390       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 15:00:34.241472       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 15:00:34.241534       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 15:00:34.243587       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:00:34.243676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:00:34.243736       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:00:34.243777       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:00:34.341595       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 15:00:34.343976       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:00:34.344085       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 15:00:36 no-preload-844780 kubelet[773]: I1121 15:00:36.360883     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjm8t\" (UniqueName: \"kubernetes.io/projected/9bd7c0e3-0f0c-48da-b343-e3b558c82dcc-kube-api-access-mjm8t\") pod \"kubernetes-dashboard-855c9754f9-6gjq5\" (UID: \"9bd7c0e3-0f0c-48da-b343-e3b558c82dcc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6gjq5"
	Nov 21 15:00:36 no-preload-844780 kubelet[773]: I1121 15:00:36.360905     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9fc3bebd-775a-4d09-947e-e26400dfc4e3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kg884\" (UID: \"9fc3bebd-775a-4d09-947e-e26400dfc4e3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884"
	Nov 21 15:00:36 no-preload-844780 kubelet[773]: W1121 15:00:36.614665     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/crio-331b6a592418b3601235c1776b40b90a0912c9e099b037cd7b6c2c1b35155bcd WatchSource:0}: Error finding container 331b6a592418b3601235c1776b40b90a0912c9e099b037cd7b6c2c1b35155bcd: Status 404 returned error can't find the container with id 331b6a592418b3601235c1776b40b90a0912c9e099b037cd7b6c2c1b35155bcd
	Nov 21 15:00:41 no-preload-844780 kubelet[773]: I1121 15:00:41.542976     773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 15:00:43 no-preload-844780 kubelet[773]: I1121 15:00:43.791398     773 scope.go:117] "RemoveContainer" containerID="bad1570406d2df1346ce65231982805f4da0a372fc55ac0b494f8a5f8320e6da"
	Nov 21 15:00:44 no-preload-844780 kubelet[773]: I1121 15:00:44.801155     773 scope.go:117] "RemoveContainer" containerID="bad1570406d2df1346ce65231982805f4da0a372fc55ac0b494f8a5f8320e6da"
	Nov 21 15:00:44 no-preload-844780 kubelet[773]: I1121 15:00:44.801447     773 scope.go:117] "RemoveContainer" containerID="4d8275dc9c9fbe9b9ef555eaaaded5f35d300457439cdea3f7c58ec32a2b3dd5"
	Nov 21 15:00:44 no-preload-844780 kubelet[773]: E1121 15:00:44.801587     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:00:45 no-preload-844780 kubelet[773]: I1121 15:00:45.823974     773 scope.go:117] "RemoveContainer" containerID="4d8275dc9c9fbe9b9ef555eaaaded5f35d300457439cdea3f7c58ec32a2b3dd5"
	Nov 21 15:00:45 no-preload-844780 kubelet[773]: E1121 15:00:45.824335     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:00:46 no-preload-844780 kubelet[773]: I1121 15:00:46.823722     773 scope.go:117] "RemoveContainer" containerID="4d8275dc9c9fbe9b9ef555eaaaded5f35d300457439cdea3f7c58ec32a2b3dd5"
	Nov 21 15:00:46 no-preload-844780 kubelet[773]: E1121 15:00:46.823891     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:01:01 no-preload-844780 kubelet[773]: I1121 15:01:01.664731     773 scope.go:117] "RemoveContainer" containerID="4d8275dc9c9fbe9b9ef555eaaaded5f35d300457439cdea3f7c58ec32a2b3dd5"
	Nov 21 15:01:01 no-preload-844780 kubelet[773]: I1121 15:01:01.873365     773 scope.go:117] "RemoveContainer" containerID="750148acce1d8a59f9fb4ba7b5e591406908b5ebfce4337f15a074d55153c1ee"
	Nov 21 15:01:01 no-preload-844780 kubelet[773]: I1121 15:01:01.873766     773 scope.go:117] "RemoveContainer" containerID="4d8275dc9c9fbe9b9ef555eaaaded5f35d300457439cdea3f7c58ec32a2b3dd5"
	Nov 21 15:01:01 no-preload-844780 kubelet[773]: E1121 15:01:01.877693     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:01:01 no-preload-844780 kubelet[773]: I1121 15:01:01.912041     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6gjq5" podStartSLOduration=10.748408544 podStartE2EDuration="25.912015385s" podCreationTimestamp="2025-11-21 15:00:36 +0000 UTC" firstStartedPulling="2025-11-21 15:00:36.654568983 +0000 UTC m=+11.237252027" lastFinishedPulling="2025-11-21 15:00:51.818175833 +0000 UTC m=+26.400858868" observedRunningTime="2025-11-21 15:00:52.863503643 +0000 UTC m=+27.446186695" watchObservedRunningTime="2025-11-21 15:01:01.912015385 +0000 UTC m=+36.494698421"
	Nov 21 15:01:04 no-preload-844780 kubelet[773]: I1121 15:01:04.883950     773 scope.go:117] "RemoveContainer" containerID="092aa66ca872f777a9f7bb1165f836461731b4ead738841303293cb5d0367e17"
	Nov 21 15:01:06 no-preload-844780 kubelet[773]: I1121 15:01:06.565278     773 scope.go:117] "RemoveContainer" containerID="750148acce1d8a59f9fb4ba7b5e591406908b5ebfce4337f15a074d55153c1ee"
	Nov 21 15:01:06 no-preload-844780 kubelet[773]: E1121 15:01:06.565467     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:01:19 no-preload-844780 kubelet[773]: I1121 15:01:19.663564     773 scope.go:117] "RemoveContainer" containerID="750148acce1d8a59f9fb4ba7b5e591406908b5ebfce4337f15a074d55153c1ee"
	Nov 21 15:01:19 no-preload-844780 kubelet[773]: E1121 15:01:19.663801     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:01:25 no-preload-844780 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 15:01:25 no-preload-844780 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 15:01:25 no-preload-844780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
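	The dashboard-metrics-scraper entries above trace kubelet's CrashLoopBackOff schedule: the restart delay starts at 10s ("back-off 10s") and doubles after each failed restart ("back-off 20s" from 15:01:01 on), up to kubelet's default cap of five minutes. A toy Go sketch of that schedule:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		backoff := 10 * time.Second // initial CrashLoopBackOff delay
		const max = 5 * time.Minute // kubelet's default cap (MaxContainerBackOff)
		for i := 0; i < 7; i++ {
			fmt.Println(backoff) // 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
			backoff *= 2
			if backoff > max {
				backoff = max
			}
		}
	}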
	
	==> kubernetes-dashboard [803b6f25723ab454823682f0924b288c077286398a7e98198fd3b7bd5f286fc6] <==
	2025/11/21 15:00:51 Starting overwatch
	2025/11/21 15:00:51 Using namespace: kubernetes-dashboard
	2025/11/21 15:00:51 Using in-cluster config to connect to apiserver
	2025/11/21 15:00:51 Using secret token for csrf signing
	2025/11/21 15:00:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 15:00:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 15:00:51 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 15:00:51 Generating JWE encryption key
	2025/11/21 15:00:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 15:00:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 15:00:52 Initializing JWE encryption key from synchronized object
	2025/11/21 15:00:52 Creating in-cluster Sidecar client
	2025/11/21 15:00:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 15:00:52 Serving insecurely on HTTP port: 9090
	2025/11/21 15:01:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [092aa66ca872f777a9f7bb1165f836461731b4ead738841303293cb5d0367e17] <==
	I1121 15:00:33.934087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 15:01:03.940801       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
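	The fatal line above is the provisioner's startup version probe (GET /version) timing out against the apiserver service IP, which is expected while the control plane is still settling after the restart; the replacement container below succeeds. A minimal sketch of the same probe with client-go's discovery client, assuming in-cluster config as the provisioner uses:
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// In-cluster config targets the same service IP (https://10.96.0.1:443) seen in the error above.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		// ServerVersion() issues the GET /version request that timed out in the log.
		v, err := kubernetes.NewForConfigOrDie(cfg).Discovery().ServerVersion()
		if err != nil {
			panic(err) // e.g. "dial tcp 10.96.0.1:443: i/o timeout" while the apiserver is unreachable
		}
		fmt.Println("apiserver version:", v.GitVersion)
	}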
	
	==> storage-provisioner [8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad] <==
	I1121 15:01:04.935848       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 15:01:04.952578       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 15:01:04.952629       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 15:01:04.956085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:08.411267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:12.671270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:16.269222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:19.323522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:22.346215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:22.353590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:01:22.353778       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 15:01:22.354360       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbf9e551-e0be-48eb-aa2e-8e3a20e98a71", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-844780_40f1a25f-ef85-4e49-9dd6-f366efa78e6c became leader
	I1121 15:01:22.354648       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-844780_40f1a25f-ef85-4e49-9dd6-f366efa78e6c!
	W1121 15:01:22.359683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:22.370926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:01:22.455481       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-844780_40f1a25f-ef85-4e49-9dd6-f366efa78e6c!
	W1121 15:01:24.373364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:24.384673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:26.388342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:26.393067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:28.396829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:28.402380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
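	The replacement provisioner above only starts its controller after winning the kube-system/k8s.io-minikube-hostpath leader election; the repeated warnings appear because this provisioner build still locks on a v1 Endpoints object. A hedged sketch of the same pattern using client-go's leaderelection package with the Lease lock that the deprecation warning points toward (identity and timings are illustrative):
	
	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		id, _ := os.Hostname() // the log's identity is <node>_<uuid>; plain hostname is a simplification
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings, not the provisioner's
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease, shutting down")
				},
			},
		})
	}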

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-844780 -n no-preload-844780
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-844780 -n no-preload-844780: exit status 2 (365.007931ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-844780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-844780
helpers_test.go:243: (dbg) docker inspect no-preload-844780:

-- stdout --
	[
	    {
	        "Id": "8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460",
	        "Created": "2025-11-21T14:58:39.813840429Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 483334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T15:00:18.47976798Z",
	            "FinishedAt": "2025-11-21T15:00:17.329526483Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/hosts",
	        "LogPath": "/var/lib/docker/containers/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460-json.log",
	        "Name": "/no-preload-844780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-844780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-844780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460",
	                "LowerDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30aebe0b3ca4716483bf95fa926217cb813474aa3eaf00d1a3a2b419e8a46c7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-844780",
	                "Source": "/var/lib/docker/volumes/no-preload-844780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-844780",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-844780",
	                "name.minikube.sigs.k8s.io": "no-preload-844780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "160be835c315be2dfb0fdf5e27dd061f73e9563d919efe89507daf8cb5996121",
	            "SandboxKey": "/var/run/docker/netns/160be835c315",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-844780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:17:d3:91:6d:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "beccd80047d00ade7f2a91d5b368d7f2498703ce72d6db7bd114ead62561b75b",
	                    "EndpointID": "b7cc625ffd7cca8046391d90e6b19fffcd7e38ea4cff15140ccfc01d3ff1b2e4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-844780",
	                        "8e592d0d77ca"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
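The Ports map in the inspect output above is how minikube resolves the host port for each published container service; the "docker container inspect -f ..." commands later in this log query it with exactly this kind of Go template. A small sketch of the same lookup, shelling out the way minikube's cli_runner does (error handling trimmed; container name taken from the output above):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same Go-template query the log later shows for the SSH port of a profile container.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "no-preload-844780").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33438 per the Ports map above
	}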
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-844780 -n no-preload-844780
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-844780 -n no-preload-844780: exit status 2 (368.296518ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-844780 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-844780 logs -n 25: (1.517990233s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-605096                                                                                                                                                                                                                        │ cert-options-605096          │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:55 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:55 UTC │ 21 Nov 25 14:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-357479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │                     │
	│ stop    │ -p old-k8s-version-357479 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-357479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:57 UTC │
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ image   │ old-k8s-version-357479 image list --format=json                                                                                                                                                                                               │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ pause   │ -p old-k8s-version-357479 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p cert-expiration-304879                                                                                                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 15:00 UTC │
	│ delete  │ -p disable-driver-mounts-984933                                                                                                                                                                                                               │ disable-driver-mounts-984933 │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ stop    │ -p no-preload-844780 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ addons  │ enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-844780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ stop    │ -p embed-certs-902161 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-902161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ image   │ no-preload-844780 image list --format=json                                                                                                                                                                                                    │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p no-preload-844780 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 15:00:30
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 15:00:30.593525  484973 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:00:30.593637  484973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:00:30.593666  484973 out.go:374] Setting ErrFile to fd 2...
	I1121 15:00:30.593672  484973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:00:30.593953  484973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:00:30.596642  484973 out.go:368] Setting JSON to false
	I1121 15:00:30.597614  484973 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9782,"bootTime":1763727448,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 15:00:30.597691  484973 start.go:143] virtualization:  
	I1121 15:00:30.600675  484973 out.go:179] * [embed-certs-902161] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 15:00:30.604492  484973 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 15:00:30.604589  484973 notify.go:221] Checking for updates...
	I1121 15:00:30.610105  484973 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 15:00:30.612902  484973 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:00:30.615730  484973 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 15:00:30.618618  484973 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 15:00:30.621567  484973 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 15:00:30.624780  484973 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:00:30.625367  484973 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 15:00:30.680640  484973 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 15:00:30.680822  484973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:00:30.796967  484973 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 15:00:30.786510982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:00:30.797070  484973 docker.go:319] overlay module found
	I1121 15:00:30.800284  484973 out.go:179] * Using the docker driver based on existing profile
	I1121 15:00:30.803085  484973 start.go:309] selected driver: docker
	I1121 15:00:30.803109  484973 start.go:930] validating driver "docker" against &{Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:00:30.803225  484973 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 15:00:30.803970  484973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:00:30.903176  484973 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 15:00:30.893036941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:00:30.903512  484973 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:00:30.903538  484973 cni.go:84] Creating CNI manager for ""
	I1121 15:00:30.903589  484973 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:00:30.903627  484973 start.go:353] cluster config:
	{Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:00:30.908908  484973 out.go:179] * Starting "embed-certs-902161" primary control-plane node in "embed-certs-902161" cluster
	I1121 15:00:30.911778  484973 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 15:00:30.914595  484973 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 15:00:30.917417  484973 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:00:30.917464  484973 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 15:00:30.917476  484973 cache.go:65] Caching tarball of preloaded images
	I1121 15:00:30.917562  484973 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 15:00:30.917571  484973 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 15:00:30.917688  484973 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/config.json ...
	I1121 15:00:30.917914  484973 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 15:00:30.945158  484973 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 15:00:30.945177  484973 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 15:00:30.945190  484973 cache.go:243] Successfully downloaded all kic artifacts
	I1121 15:00:30.945226  484973 start.go:360] acquireMachinesLock for embed-certs-902161: {Name:mk52b2685f312e9983127cfd2341df0728e188b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 15:00:30.945281  484973 start.go:364] duration metric: took 36.128µs to acquireMachinesLock for "embed-certs-902161"
	I1121 15:00:30.945299  484973 start.go:96] Skipping create...Using existing machine configuration
	I1121 15:00:30.945305  484973 fix.go:54] fixHost starting: 
	I1121 15:00:30.945555  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:30.974678  484973 fix.go:112] recreateIfNeeded on embed-certs-902161: state=Stopped err=<nil>
	W1121 15:00:30.974707  484973 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 15:00:32.252284  483158 node_ready.go:49] node "no-preload-844780" is "Ready"
	I1121 15:00:32.252318  483158 node_ready.go:38] duration metric: took 5.535797122s for node "no-preload-844780" to be "Ready" ...
	I1121 15:00:32.252337  483158 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:00:32.252426  483158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:00:34.031685  483158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.275164059s)
	I1121 15:00:34.031769  483158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.258242768s)
	I1121 15:00:34.114545  483158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.924423153s)
	I1121 15:00:34.114820  483158 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.862380689s)
	I1121 15:00:34.114863  483158 api_server.go:72] duration metric: took 7.774802246s to wait for apiserver process to appear ...
	I1121 15:00:34.114898  483158 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:00:34.114939  483158 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 15:00:34.117927  483158 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-844780 addons enable metrics-server
	
	I1121 15:00:34.120687  483158 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1121 15:00:30.977945  484973 out.go:252] * Restarting existing docker container for "embed-certs-902161" ...
	I1121 15:00:30.978046  484973 cli_runner.go:164] Run: docker start embed-certs-902161
	I1121 15:00:31.377148  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:31.404630  484973 kic.go:430] container "embed-certs-902161" state is running.
	I1121 15:00:31.405045  484973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 15:00:31.432707  484973 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/config.json ...
	I1121 15:00:31.432951  484973 machine.go:94] provisionDockerMachine start ...
	I1121 15:00:31.433013  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:31.460136  484973 main.go:143] libmachine: Using SSH client type: native
	I1121 15:00:31.460486  484973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1121 15:00:31.460496  484973 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 15:00:31.461501  484973 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 15:00:34.620031  484973 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-902161
	
	I1121 15:00:34.620056  484973 ubuntu.go:182] provisioning hostname "embed-certs-902161"
	I1121 15:00:34.620153  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:34.643909  484973 main.go:143] libmachine: Using SSH client type: native
	I1121 15:00:34.644224  484973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1121 15:00:34.644242  484973 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-902161 && echo "embed-certs-902161" | sudo tee /etc/hostname
	I1121 15:00:34.823624  484973 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-902161
	
	I1121 15:00:34.823722  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:34.843196  484973 main.go:143] libmachine: Using SSH client type: native
	I1121 15:00:34.843512  484973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1121 15:00:34.843539  484973 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-902161' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-902161/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-902161' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 15:00:34.990853  484973 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 15:00:34.990879  484973 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 15:00:34.990922  484973 ubuntu.go:190] setting up certificates
	I1121 15:00:34.990938  484973 provision.go:84] configureAuth start
	I1121 15:00:34.991013  484973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 15:00:35.028966  484973 provision.go:143] copyHostCerts
	I1121 15:00:35.029046  484973 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 15:00:35.029071  484973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 15:00:35.029159  484973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 15:00:35.029288  484973 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 15:00:35.029302  484973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 15:00:35.029335  484973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 15:00:35.029413  484973 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 15:00:35.029423  484973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 15:00:35.029451  484973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 15:00:35.029520  484973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.embed-certs-902161 san=[127.0.0.1 192.168.76.2 embed-certs-902161 localhost minikube]
	I1121 15:00:35.316146  484973 provision.go:177] copyRemoteCerts
	I1121 15:00:35.316223  484973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 15:00:35.316266  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:35.337248  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:35.444866  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 15:00:35.468245  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1121 15:00:35.487779  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 15:00:35.510427  484973 provision.go:87] duration metric: took 519.468134ms to configureAuth
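	The server certificate generated during configureAuth carries the SANs listed above (127.0.0.1, 192.168.76.2, embed-certs-902161, localhost, minikube). One way to confirm them on the copy placed at /etc/docker/server.pem, assuming openssl is available inside the node:
	
		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'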
	I1121 15:00:35.510455  484973 ubuntu.go:206] setting minikube options for container-runtime
	I1121 15:00:35.510685  484973 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:00:35.510815  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:35.543699  484973 main.go:143] libmachine: Using SSH client type: native
	I1121 15:00:35.544004  484973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1121 15:00:35.544018  484973 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 15:00:35.955786  484973 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 15:00:35.955861  484973 machine.go:97] duration metric: took 4.522897923s to provisionDockerMachine
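	The CRIO_MINIKUBE_OPTIONS value is written to /etc/sysconfig/crio.minikube and crio is restarted; presumably the crio systemd unit reads that file as an EnvironmentFile. A way to verify that assumption on the node:
	
		systemctl cat crio | grep -i environmentfile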
	I1121 15:00:35.955887  484973 start.go:293] postStartSetup for "embed-certs-902161" (driver="docker")
	I1121 15:00:35.955912  484973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 15:00:35.956011  484973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 15:00:35.956067  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:35.980891  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:36.089496  484973 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 15:00:36.092998  484973 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 15:00:36.093032  484973 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 15:00:36.093045  484973 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 15:00:36.093107  484973 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 15:00:36.093219  484973 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 15:00:36.093326  484973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 15:00:36.105328  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:00:36.131615  484973 start.go:296] duration metric: took 175.692953ms for postStartSetup
	I1121 15:00:36.131710  484973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 15:00:36.131764  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:36.159512  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:36.265874  484973 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 15:00:36.272999  484973 fix.go:56] duration metric: took 5.327685783s for fixHost
	I1121 15:00:36.273023  484973 start.go:83] releasing machines lock for "embed-certs-902161", held for 5.327734325s
	I1121 15:00:36.273098  484973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-902161
	I1121 15:00:36.296648  484973 ssh_runner.go:195] Run: cat /version.json
	I1121 15:00:36.296698  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:36.296713  484973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 15:00:36.296766  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:36.342540  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:36.354692  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:36.444262  484973 ssh_runner.go:195] Run: systemctl --version
	I1121 15:00:36.542925  484973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 15:00:36.595084  484973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 15:00:36.600518  484973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 15:00:36.600633  484973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 15:00:36.609885  484973 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
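	The find invocation above would disable any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix; here there were none. To undo such renames by hand, an untested sketch:
	
		sudo find /etc/cni/net.d -name '*.mk_disabled' \
		  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;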
	I1121 15:00:36.609954  484973 start.go:496] detecting cgroup driver to use...
	I1121 15:00:36.609999  484973 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 15:00:36.610073  484973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 15:00:36.635625  484973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 15:00:36.652984  484973 docker.go:218] disabling cri-docker service (if available) ...
	I1121 15:00:36.653108  484973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 15:00:36.674892  484973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 15:00:36.694370  484973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 15:00:36.847552  484973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 15:00:36.978274  484973 docker.go:234] disabling docker service ...
	I1121 15:00:36.978360  484973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 15:00:36.996673  484973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 15:00:37.013748  484973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 15:00:37.178664  484973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 15:00:37.366392  484973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 15:00:37.381058  484973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 15:00:37.398935  484973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 15:00:37.399008  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.409728  484973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 15:00:37.409849  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.419482  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.428858  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.437873  484973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 15:00:37.446537  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.455910  484973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.464759  484973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:00:37.474131  484973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 15:00:37.483070  484973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 15:00:37.492442  484973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:00:37.666882  484973 ssh_runner.go:195] Run: sudo systemctl restart crio
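	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch; the exact TOML sections are assumed, since the edits only rewrite or insert individual lines):
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]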
	I1121 15:00:37.882061  484973 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 15:00:37.882143  484973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 15:00:37.890867  484973 start.go:564] Will wait 60s for crictl version
	I1121 15:00:37.890959  484973 ssh_runner.go:195] Run: which crictl
	I1121 15:00:37.896736  484973 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 15:00:37.959510  484973 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 15:00:37.959631  484973 ssh_runner.go:195] Run: crio --version
	I1121 15:00:38.000360  484973 ssh_runner.go:195] Run: crio --version
	I1121 15:00:38.055503  484973 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 15:00:34.123500  483158 addons.go:530] duration metric: took 7.783041646s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1121 15:00:34.123983  483158 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 15:00:34.125067  483158 api_server.go:141] control plane version: v1.34.1
	I1121 15:00:34.125096  483158 api_server.go:131] duration metric: took 10.179303ms to wait for apiserver health ...
	I1121 15:00:34.125106  483158 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:00:34.128558  483158 system_pods.go:59] 8 kube-system pods found
	I1121 15:00:34.128608  483158 system_pods.go:61] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:34.128618  483158 system_pods.go:61] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:00:34.128624  483158 system_pods.go:61] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 15:00:34.128634  483158 system_pods.go:61] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:00:34.128641  483158 system_pods.go:61] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:00:34.128648  483158 system_pods.go:61] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1121 15:00:34.128659  483158 system_pods.go:61] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:00:34.128667  483158 system_pods.go:61] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:00:34.128675  483158 system_pods.go:74] duration metric: took 3.563858ms to wait for pod list to return data ...
	I1121 15:00:34.128685  483158 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:00:34.131699  483158 default_sa.go:45] found service account: "default"
	I1121 15:00:34.131728  483158 default_sa.go:55] duration metric: took 3.031757ms for default service account to be created ...
	I1121 15:00:34.131737  483158 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:00:34.139170  483158 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:34.139215  483158 system_pods.go:89] "coredns-66bc5c9577-2mqjs" [96d5956d-d71f-4509-86fe-94f9c8b6832a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:34.139226  483158 system_pods.go:89] "etcd-no-preload-844780" [17c66826-5545-4905-9ef9-a63dc8cc8fa6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:00:34.139234  483158 system_pods.go:89] "kindnet-whwj8" [66ed1cd4-bb39-4b0f-b52e-a4061329e72b] Running
	I1121 15:00:34.139241  483158 system_pods.go:89] "kube-apiserver-no-preload-844780" [b286018d-5cad-4c67-9c97-7853c5c9eef3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:00:34.139251  483158 system_pods.go:89] "kube-controller-manager-no-preload-844780" [0005e01e-7c78-4ee6-a294-7a321177ed07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:00:34.139262  483158 system_pods.go:89] "kube-proxy-2zwvg" [26e02c8a-4f48-4406-8a0c-05fc4951a8c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1121 15:00:34.139268  483158 system_pods.go:89] "kube-scheduler-no-preload-844780" [c5aa6f84-0262-4786-9ba4-b0149e3bc8bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:00:34.139285  483158 system_pods.go:89] "storage-provisioner" [01c5a82c-94b5-42d1-8159-096f9fdca84a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:00:34.139292  483158 system_pods.go:126] duration metric: took 7.550136ms to wait for k8s-apps to be running ...
	I1121 15:00:34.139301  483158 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:00:34.139382  483158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:00:34.155237  483158 system_svc.go:56] duration metric: took 15.911277ms WaitForService to wait for kubelet
	I1121 15:00:34.155267  483158 kubeadm.go:587] duration metric: took 7.815205635s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:00:34.155289  483158 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:00:34.158840  483158 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:00:34.158877  483158 node_conditions.go:123] node cpu capacity is 2
	I1121 15:00:34.158905  483158 node_conditions.go:105] duration metric: took 3.588063ms to run NodePressure ...
	I1121 15:00:34.158919  483158 start.go:242] waiting for startup goroutines ...
	I1121 15:00:34.158930  483158 start.go:247] waiting for cluster config update ...
	I1121 15:00:34.158948  483158 start.go:256] writing updated cluster config ...
	I1121 15:00:34.159299  483158 ssh_runner.go:195] Run: rm -f paused
	I1121 15:00:34.164822  483158 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:00:34.176094  483158 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2mqjs" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 15:00:36.203845  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	I1121 15:00:38.058543  484973 cli_runner.go:164] Run: docker network inspect embed-certs-902161 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 15:00:38.077698  484973 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 15:00:38.082151  484973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
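	Note the idiom used to edit /etc/hosts: rather than sed -i, which replaces the file's inode and is known to fail on the /etc/hosts that Docker bind-mounts into a container, the old entry is filtered out with grep -v, the new one appended, and the result copied back over the original. The same pattern reduced to a generic sketch, with ADDR and NAME as placeholders:
	
		{ grep -v $'\tNAME$' /etc/hosts; printf 'ADDR\tNAME\n'; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts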
	I1121 15:00:38.097170  484973 kubeadm.go:884] updating cluster {Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 15:00:38.097296  484973 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:00:38.097350  484973 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:00:38.153556  484973 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:00:38.153576  484973 crio.go:433] Images already preloaded, skipping extraction
	I1121 15:00:38.153636  484973 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:00:38.198676  484973 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:00:38.198760  484973 cache_images.go:86] Images are preloaded, skipping loading
	I1121 15:00:38.198782  484973 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1121 15:00:38.198936  484973 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-902161 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 15:00:38.199064  484973 ssh_runner.go:195] Run: crio config
	I1121 15:00:38.309235  484973 cni.go:84] Creating CNI manager for ""
	I1121 15:00:38.309315  484973 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:00:38.309347  484973 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 15:00:38.309485  484973 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-902161 NodeName:embed-certs-902161 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 15:00:38.309669  484973 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-902161"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 15:00:38.309789  484973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 15:00:38.320672  484973 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 15:00:38.320798  484973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 15:00:38.331044  484973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1121 15:00:38.353096  484973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 15:00:38.370306  484973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
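	The kubeadm config assembled above (2215 bytes) now sits at /var/tmp/minikube/kubeadm.yaml.new. If one wanted to sanity-check such a file by hand, recent kubeadm releases ship a validator (a sketch, assuming kubeadm v1.34 is on the PATH):
	
		kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new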
	I1121 15:00:38.391480  484973 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 15:00:38.395512  484973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:00:38.410553  484973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:00:38.587367  484973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:00:38.608500  484973 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161 for IP: 192.168.76.2
	I1121 15:00:38.608576  484973 certs.go:195] generating shared ca certs ...
	I1121 15:00:38.608606  484973 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:00:38.608869  484973 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 15:00:38.608981  484973 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 15:00:38.609016  484973 certs.go:257] generating profile certs ...
	I1121 15:00:38.609147  484973 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/client.key
	I1121 15:00:38.609255  484973 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key.5d5840b9
	I1121 15:00:38.609336  484973 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key
	I1121 15:00:38.609485  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 15:00:38.609548  484973 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 15:00:38.609570  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 15:00:38.609628  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 15:00:38.609695  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 15:00:38.609738  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 15:00:38.609816  484973 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:00:38.610719  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 15:00:38.664437  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 15:00:38.714334  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 15:00:38.756929  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 15:00:38.808013  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1121 15:00:38.859334  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 15:00:38.933677  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 15:00:38.966765  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/embed-certs-902161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 15:00:38.991635  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 15:00:39.023107  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 15:00:39.075863  484973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 15:00:39.100797  484973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 15:00:39.116371  484973 ssh_runner.go:195] Run: openssl version
	I1121 15:00:39.123631  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 15:00:39.134301  484973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:00:39.138769  484973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:00:39.138907  484973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:00:39.198381  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 15:00:39.207670  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 15:00:39.217299  484973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 15:00:39.222003  484973 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 15:00:39.222122  484973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 15:00:39.267076  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 15:00:39.277390  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 15:00:39.288058  484973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 15:00:39.293015  484973 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 15:00:39.293172  484973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 15:00:39.338201  484973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
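	Each certificate is linked into /etc/ssl/certs under the name <hash>.0, where <hash> is the subject-name hash printed by openssl x509 -hash; that filename is what OpenSSL's directory-based lookup expects. Reproducing the last link by hand:
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem)
		sudo ln -fs /usr/share/ca-certificates/2910602.pem "/etc/ssl/certs/$h.0"
		# here $h is 3ec20f2e, matching the symlink created above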
	I1121 15:00:39.349541  484973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 15:00:39.354231  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 15:00:39.398745  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 15:00:39.441522  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 15:00:39.539555  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 15:00:39.644511  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 15:00:39.747578  484973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
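	The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), presumably so that soon-to-expire control-plane certs can be regenerated before the restart. Standalone:
	
		openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
		  && echo 'valid for at least 24h'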
	I1121 15:00:39.863816  484973 kubeadm.go:401] StartCluster: {Name:embed-certs-902161 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-902161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:00:39.863982  484973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 15:00:39.864098  484973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 15:00:39.975142  484973 cri.go:89] found id: "293c832724412d175c6e8ec646f8f5a753d6137d6354da90fcdc7748544c0176"
	I1121 15:00:39.975217  484973 cri.go:89] found id: "7f311233a0597f06fd619eca3d2076efd29a59099af3f91b2e7ad174953bec43"
	I1121 15:00:39.975236  484973 cri.go:89] found id: "f8022ed115d2dc50106d1d8099fe151f9220a20d78fad121bb27fe4d5d278763"
	I1121 15:00:39.975256  484973 cri.go:89] found id: "0040362a6ed65610771d229f1c844dc6fd8551a599ac1712dfac5b502944fa4e"
	I1121 15:00:39.975287  484973 cri.go:89] found id: ""
	I1121 15:00:39.975379  484973 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 15:00:39.998445  484973 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:00:39Z" level=error msg="open /run/runc: no such file or directory"
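	The /run/runc error is benign here: runc only creates its state directory once a container has been started through it, so on a freshly restarted node the paused-container listing fails and the code logs a warning and carries on; the crictl query just above had already enumerated the kube-system containers. Re-running that query by hand:
	
		sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system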
	I1121 15:00:39.998617  484973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 15:00:40.014656  484973 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 15:00:40.014755  484973 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 15:00:40.014852  484973 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 15:00:40.027985  484973 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 15:00:40.028835  484973 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-902161" does not appear in /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:00:40.029223  484973 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-289204/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-902161" cluster setting kubeconfig missing "embed-certs-902161" context setting]
	I1121 15:00:40.030214  484973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:00:40.032248  484973 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 15:00:40.045665  484973 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1121 15:00:40.045763  484973 kubeadm.go:602] duration metric: took 30.984595ms to restartPrimaryControlPlane
	I1121 15:00:40.045789  484973 kubeadm.go:403] duration metric: took 181.984825ms to StartCluster
	I1121 15:00:40.045846  484973 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:00:40.045962  484973 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:00:40.047528  484973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:00:40.048053  484973 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:00:40.048586  484973 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 15:00:40.048683  484973 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-902161"
	I1121 15:00:40.048700  484973 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-902161"
	W1121 15:00:40.048707  484973 addons.go:248] addon storage-provisioner should already be in state true
	I1121 15:00:40.048733  484973 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 15:00:40.049600  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:40.050028  484973 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:00:40.050164  484973 addons.go:70] Setting dashboard=true in profile "embed-certs-902161"
	I1121 15:00:40.050200  484973 addons.go:239] Setting addon dashboard=true in "embed-certs-902161"
	W1121 15:00:40.050238  484973 addons.go:248] addon dashboard should already be in state true
	I1121 15:00:40.050285  484973 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 15:00:40.050832  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:40.059456  484973 addons.go:70] Setting default-storageclass=true in profile "embed-certs-902161"
	I1121 15:00:40.059749  484973 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-902161"
	I1121 15:00:40.059616  484973 out.go:179] * Verifying Kubernetes components...
	I1121 15:00:40.064163  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:40.067623  484973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:00:40.110623  484973 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 15:00:40.115435  484973 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 15:00:40.122830  484973 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:00:40.122856  484973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 15:00:40.122937  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:40.123100  484973 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1121 15:00:40.126581  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 15:00:40.126610  484973 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 15:00:40.126699  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:40.147762  484973 addons.go:239] Setting addon default-storageclass=true in "embed-certs-902161"
	W1121 15:00:40.147794  484973 addons.go:248] addon default-storageclass should already be in state true
	I1121 15:00:40.147821  484973 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 15:00:40.148289  484973 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:00:40.170738  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:40.198761  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:40.218417  484973 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 15:00:40.218441  484973 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 15:00:40.218515  484973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:00:40.245286  484973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:00:40.528964  484973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1121 15:00:38.682847  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:40.685751  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:42.687604  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	I1121 15:00:40.594855  484973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 15:00:40.611173  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 15:00:40.611200  484973 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 15:00:40.645401  484973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:00:40.791048  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 15:00:40.791074  484973 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 15:00:40.861670  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 15:00:40.861697  484973 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 15:00:40.965180  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 15:00:40.965204  484973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 15:00:41.021800  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 15:00:41.021829  484973 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 15:00:41.043585  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 15:00:41.043609  484973 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 15:00:41.081321  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 15:00:41.081347  484973 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 15:00:41.125316  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 15:00:41.125342  484973 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 15:00:41.151603  484973 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 15:00:41.151631  484973 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 15:00:41.197651  484973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1121 15:00:45.295303  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:47.681044  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	I1121 15:00:50.006966  484973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.412073858s)
	I1121 15:00:50.007022  484973 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.361583532s)
	I1121 15:00:50.007188  484973 node_ready.go:35] waiting up to 6m0s for node "embed-certs-902161" to be "Ready" ...
	I1121 15:00:50.007549  484973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.478546805s)
	I1121 15:00:50.063340  484973 node_ready.go:49] node "embed-certs-902161" is "Ready"
	I1121 15:00:50.063424  484973 node_ready.go:38] duration metric: took 56.206848ms for node "embed-certs-902161" to be "Ready" ...
	I1121 15:00:50.063543  484973 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:00:50.063641  484973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:00:50.189829  484973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.992132215s)
	I1121 15:00:50.190058  484973 api_server.go:72] duration metric: took 10.141918792s to wait for apiserver process to appear ...
	I1121 15:00:50.190113  484973 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:00:50.190163  484973 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:00:50.192923  484973 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-902161 addons enable metrics-server
	
	I1121 15:00:50.196063  484973 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1121 15:00:50.199077  484973 addons.go:530] duration metric: took 10.150477121s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1121 15:00:50.220353  484973 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 15:00:50.221705  484973 api_server.go:141] control plane version: v1.34.1
	I1121 15:00:50.221729  484973 api_server.go:131] duration metric: took 31.592684ms to wait for apiserver health ...
	I1121 15:00:50.221739  484973 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:00:50.235520  484973 system_pods.go:59] 8 kube-system pods found
	I1121 15:00:50.235616  484973 system_pods.go:61] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:50.235642  484973 system_pods.go:61] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:00:50.235675  484973 system_pods.go:61] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:50.235703  484973 system_pods.go:61] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:00:50.235727  484973 system_pods.go:61] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:00:50.235760  484973 system_pods.go:61] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:50.235789  484973 system_pods.go:61] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:00:50.235806  484973 system_pods.go:61] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Running
	I1121 15:00:50.235842  484973 system_pods.go:74] duration metric: took 14.0801ms to wait for pod list to return data ...
	I1121 15:00:50.235867  484973 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:00:50.245066  484973 default_sa.go:45] found service account: "default"
	I1121 15:00:50.245143  484973 default_sa.go:55] duration metric: took 9.2567ms for default service account to be created ...
	I1121 15:00:50.245167  484973 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:00:50.258793  484973 system_pods.go:86] 8 kube-system pods found
	I1121 15:00:50.258882  484973 system_pods.go:89] "coredns-66bc5c9577-gttll" [3a4724fc-20fc-4b84-86b5-c3e0255a8563] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:00:50.258906  484973 system_pods.go:89] "etcd-embed-certs-902161" [309c07f1-280e-4d9a-843b-35f40a324377] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:00:50.258944  484973 system_pods.go:89] "kindnet-9zs98" [4f7aaa72-4c04-42c6-b6c3-363eef49e44f] Running
	I1121 15:00:50.258969  484973 system_pods.go:89] "kube-apiserver-embed-certs-902161" [8c20ac9a-c354-4006-9665-84034e82b5d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:00:50.258990  484973 system_pods.go:89] "kube-controller-manager-embed-certs-902161" [d45d9c17-2a9b-461c-92a3-41bd18aa506b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:00:50.259009  484973 system_pods.go:89] "kube-proxy-wkbb9" [a59095a4-c10e-4739-809b-fa5606b9b835] Running
	I1121 15:00:50.259041  484973 system_pods.go:89] "kube-scheduler-embed-certs-902161" [f5174845-1837-44ad-9a71-4b137e00d752] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:00:50.259065  484973 system_pods.go:89] "storage-provisioner" [90f25b5f-e180-47de-830a-c9fd43709936] Running
	I1121 15:00:50.259085  484973 system_pods.go:126] duration metric: took 13.900587ms to wait for k8s-apps to be running ...
	I1121 15:00:50.259118  484973 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:00:50.259208  484973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:00:50.346755  484973 system_svc.go:56] duration metric: took 87.628568ms WaitForService to wait for kubelet
	I1121 15:00:50.346837  484973 kubeadm.go:587] duration metric: took 10.298698255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:00:50.346871  484973 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:00:50.351286  484973 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:00:50.351367  484973 node_conditions.go:123] node cpu capacity is 2
	I1121 15:00:50.351396  484973 node_conditions.go:105] duration metric: took 4.506957ms to run NodePressure ...
	I1121 15:00:50.351422  484973 start.go:242] waiting for startup goroutines ...
	I1121 15:00:50.351454  484973 start.go:247] waiting for cluster config update ...
	I1121 15:00:50.351483  484973 start.go:256] writing updated cluster config ...
	I1121 15:00:50.351825  484973 ssh_runner.go:195] Run: rm -f paused
	I1121 15:00:50.356663  484973 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:00:50.363303  484973 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gttll" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 15:00:49.682183  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:52.182446  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:52.368672  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:00:54.869535  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:00:54.682057  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:57.181621  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:00:57.372469  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:00:59.870659  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:00:59.683571  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:01:01.695669  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:01:01.881980  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:04.369814  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:04.181512  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:01:06.182186  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:01:06.869541  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:08.869871  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:08.681773  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	W1121 15:01:10.682804  483158 pod_ready.go:104] pod "coredns-66bc5c9577-2mqjs" is not "Ready", error: <nil>
	I1121 15:01:11.681627  483158 pod_ready.go:94] pod "coredns-66bc5c9577-2mqjs" is "Ready"
	I1121 15:01:11.681700  483158 pod_ready.go:86] duration metric: took 37.505576347s for pod "coredns-66bc5c9577-2mqjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.685448  483158 pod_ready.go:83] waiting for pod "etcd-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.690530  483158 pod_ready.go:94] pod "etcd-no-preload-844780" is "Ready"
	I1121 15:01:11.690560  483158 pod_ready.go:86] duration metric: took 5.081562ms for pod "etcd-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.693254  483158 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.698788  483158 pod_ready.go:94] pod "kube-apiserver-no-preload-844780" is "Ready"
	I1121 15:01:11.698816  483158 pod_ready.go:86] duration metric: took 5.538313ms for pod "kube-apiserver-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.701176  483158 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:11.879777  483158 pod_ready.go:94] pod "kube-controller-manager-no-preload-844780" is "Ready"
	I1121 15:01:11.879808  483158 pod_ready.go:86] duration metric: took 178.607383ms for pod "kube-controller-manager-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:12.079865  483158 pod_ready.go:83] waiting for pod "kube-proxy-2zwvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:12.479525  483158 pod_ready.go:94] pod "kube-proxy-2zwvg" is "Ready"
	I1121 15:01:12.479582  483158 pod_ready.go:86] duration metric: took 399.68353ms for pod "kube-proxy-2zwvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:12.680190  483158 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:13.079558  483158 pod_ready.go:94] pod "kube-scheduler-no-preload-844780" is "Ready"
	I1121 15:01:13.079586  483158 pod_ready.go:86] duration metric: took 399.362467ms for pod "kube-scheduler-no-preload-844780" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:01:13.079639  483158 pod_ready.go:40] duration metric: took 38.914749518s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:01:13.147429  483158 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:01:13.150317  483158 out.go:179] * Done! kubectl is now configured to use "no-preload-844780" cluster and "default" namespace by default
	W1121 15:01:11.370147  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:13.868693  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:15.869074  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:18.369824  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:20.874083  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:23.369678  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	W1121 15:01:25.373699  484973 pod_ready.go:104] pod "coredns-66bc5c9577-gttll" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.885929466Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c9d7c128-3ef1-4f83-b087-7f45d928b862 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.886921445Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=22b9e420-5f0d-4290-a0df-56ae2bf9b420 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.887068991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.895890675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.896099046Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/54d1cac443932b1b9428831edcdee248bc1945dc635b93051921db5415d0bea8/merged/etc/passwd: no such file or directory"
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.896129504Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/54d1cac443932b1b9428831edcdee248bc1945dc635b93051921db5415d0bea8/merged/etc/group: no such file or directory"
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.896415834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.917427986Z" level=info msg="Created container 8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad: kube-system/storage-provisioner/storage-provisioner" id=22b9e420-5f0d-4290-a0df-56ae2bf9b420 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.918546801Z" level=info msg="Starting container: 8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad" id=595d57c7-2739-4c4c-899b-9e37d478e849 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:01:04 no-preload-844780 crio[653]: time="2025-11-21T15:01:04.920737342Z" level=info msg="Started container" PID=1631 containerID=8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad description=kube-system/storage-provisioner/storage-provisioner id=595d57c7-2739-4c4c-899b-9e37d478e849 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e3e7b601f74d3c025a16faa58b55735d69b321893d1876f06381a31ecf7a705
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.804596294Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.808793266Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.808844943Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.80886946Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.813181075Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.813215874Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.813239251Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.816419185Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.816452884Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.816473496Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.819788964Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.819823484Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.819856469Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.823911245Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:13 no-preload-844780 crio[653]: time="2025-11-21T15:01:13.823948225Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8e46ba8bffec9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           25 seconds ago       Running             storage-provisioner         2                   3e3e7b601f74d       storage-provisioner                          kube-system
	750148acce1d8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   331b6a592418b       dashboard-metrics-scraper-6ffb444bf9-kg884   kubernetes-dashboard
	803b6f25723ab       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   788ce08e5213d       kubernetes-dashboard-855c9754f9-6gjq5        kubernetes-dashboard
	91214642a2e4d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   8370aa5d83668       coredns-66bc5c9577-2mqjs                     kube-system
	b8a665b3a7370       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   2a984946bb90c       busybox                                      default
	092aa66ca872f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           57 seconds ago       Exited              storage-provisioner         1                   3e3e7b601f74d       storage-provisioner                          kube-system
	25bca3969133f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   e892fb499a45c       kube-proxy-2zwvg                             kube-system
	ae6f69c7f5749       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   fc5b4056b8ad3       kindnet-whwj8                                kube-system
	f28d6ebe45c65       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   c719f7bb9d1d7       etcd-no-preload-844780                       kube-system
	4de5050f30939       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   bb25d0f582fee       kube-apiserver-no-preload-844780             kube-system
	8ef8bdf61c8fb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   080de2f635bd2       kube-controller-manager-no-preload-844780    kube-system
	b93ce5f43f1f5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   6fec832e5e215       kube-scheduler-no-preload-844780             kube-system
	
	
	==> coredns [91214642a2e4d239b7aa08e3a3850f1413ec65232ed5fed18be3647fe444771c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45889 - 23168 "HINFO IN 6519282933529723802.8562960686694559870. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032625788s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-844780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-844780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=no-preload-844780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_59_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:59:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-844780
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:01:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:01:13 +0000   Fri, 21 Nov 2025 14:59:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:01:13 +0000   Fri, 21 Nov 2025 14:59:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:01:13 +0000   Fri, 21 Nov 2025 14:59:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 15:01:13 +0000   Fri, 21 Nov 2025 14:59:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-844780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                0ed5c352-e309-429b-9135-9dfa2d81a7b2
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-2mqjs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-no-preload-844780                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-whwj8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-no-preload-844780              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-844780     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-2zwvg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-no-preload-844780              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kg884    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6gjq5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 114s               kube-proxy       
	  Normal   Starting                 56s                kube-proxy       
	  Normal   Starting                 2m8s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m                 kubelet          Node no-preload-844780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m                 kubelet          Node no-preload-844780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m                 kubelet          Node no-preload-844780 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           116s               node-controller  Node no-preload-844780 event: Registered Node no-preload-844780 in Controller
	  Normal   NodeReady                101s               kubelet          Node no-preload-844780 status is now: NodeReady
	  Normal   Starting                 65s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node no-preload-844780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node no-preload-844780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)  kubelet          Node no-preload-844780 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node no-preload-844780 event: Registered Node no-preload-844780 in Controller
	
	
	==> dmesg <==
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	[Nov21 15:00] overlayfs: idmapped layers are currently not supported
	[ +13.392744] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f28d6ebe45c6536743a007a0f7945e8b16f3fabfc7bfef52a4f2c46fd0f649b8] <==
	{"level":"warn","ts":"2025-11-21T15:00:29.252700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.261529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.291405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.322411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.353359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.392067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.428360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.447601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.496709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.563139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.598924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.623104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.663011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.677780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.728560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.759618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.783400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.864834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.906868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.971989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:29.997283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:30.032727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:30.080365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:30.118990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:30.256911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51068","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:01:30 up  2:44,  0 user,  load average: 3.61, 3.31, 2.71
	Linux no-preload-844780 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ae6f69c7f5749ac30e241e7543f9fb184fc24b729c4951ad03bbd80c5b5f834e] <==
	I1121 15:00:33.548738       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 15:00:33.604975       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 15:00:33.605160       1 main.go:148] setting mtu 1500 for CNI 
	I1121 15:00:33.605175       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 15:00:33.605187       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T15:00:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 15:00:33.805442       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 15:00:33.812979       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 15:00:33.813003       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 15:00:33.813540       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 15:01:03.804245       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 15:01:03.806416       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 15:01:03.813981       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 15:01:03.813981       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1121 15:01:05.213607       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 15:01:05.213639       1 metrics.go:72] Registering metrics
	I1121 15:01:05.213714       1 controller.go:711] "Syncing nftables rules"
	I1121 15:01:13.804290       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:01:13.804340       1 main.go:301] handling current node
	I1121 15:01:23.812205       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:01:23.812241       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4de5050f30939399107806067ae7c00df56197c6f825d9cbdd2926418c8dfb1c] <==
	I1121 15:00:32.306020       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 15:00:32.306160       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 15:00:32.316917       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1121 15:00:32.316993       1 aggregator.go:171] initial CRD sync complete...
	I1121 15:00:32.317003       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 15:00:32.317022       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 15:00:32.317029       1 cache.go:39] Caches are synced for autoregister controller
	I1121 15:00:32.317284       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 15:00:32.317298       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 15:00:32.317433       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 15:00:32.317485       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 15:00:32.327516       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1121 15:00:32.334366       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 15:00:32.351206       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 15:00:32.644785       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 15:00:32.740986       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 15:00:33.791728       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 15:00:33.931588       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 15:00:33.986773       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 15:00:34.006032       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 15:00:34.090864       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.232.191"}
	I1121 15:00:34.107434       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.233.26"}
	I1121 15:00:35.810726       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 15:00:36.060749       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 15:00:36.112057       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8ef8bdf61c8fb8bd85b829ec545c408d5a4aedca375c5a8696c35c65e8c4bb35] <==
	I1121 15:00:35.657121       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 15:00:35.657157       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 15:00:35.657383       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 15:00:35.657427       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 15:00:35.657532       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 15:00:35.657542       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 15:00:35.660007       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 15:00:35.660081       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:00:35.661329       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 15:00:35.662153       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 15:00:35.668084       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 15:00:35.668235       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 15:00:35.668905       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 15:00:35.670763       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 15:00:35.673419       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-844780"
	I1121 15:00:35.673997       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 15:00:35.673519       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 15:00:35.671609       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 15:00:35.677373       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 15:00:35.690715       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:00:35.692691       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:00:35.692718       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 15:00:35.692726       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 15:00:35.703722       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 15:00:35.714102       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [25bca3969133f7a61c59e66d79a502421490cba2b74f6e2402d9413554a4e50c] <==
	I1121 15:00:34.022353       1 server_linux.go:53] "Using iptables proxy"
	I1121 15:00:34.222293       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 15:00:34.324636       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 15:00:34.324677       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 15:00:34.324762       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 15:00:34.379086       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 15:00:34.379155       1 server_linux.go:132] "Using iptables Proxier"
	I1121 15:00:34.383872       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 15:00:34.384237       1 server.go:527] "Version info" version="v1.34.1"
	I1121 15:00:34.384264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:00:34.385723       1 config.go:200] "Starting service config controller"
	I1121 15:00:34.385748       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 15:00:34.385767       1 config.go:106] "Starting endpoint slice config controller"
	I1121 15:00:34.385771       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 15:00:34.385781       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 15:00:34.385786       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 15:00:34.386903       1 config.go:309] "Starting node config controller"
	I1121 15:00:34.386923       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 15:00:34.386931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 15:00:34.486454       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 15:00:34.486494       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 15:00:34.486534       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b93ce5f43f1f5d952f34aacec278e0b6e010e0f24f89fe665a79bb363f7b369e] <==
	I1121 15:00:30.256250       1 serving.go:386] Generated self-signed cert in-memory
	I1121 15:00:34.226497       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 15:00:34.226537       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:00:34.241094       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 15:00:34.241390       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 15:00:34.241472       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 15:00:34.241534       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 15:00:34.243587       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:00:34.243676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:00:34.243736       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:00:34.243777       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:00:34.341595       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 15:00:34.343976       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:00:34.344085       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 15:00:36 no-preload-844780 kubelet[773]: I1121 15:00:36.360883     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjm8t\" (UniqueName: \"kubernetes.io/projected/9bd7c0e3-0f0c-48da-b343-e3b558c82dcc-kube-api-access-mjm8t\") pod \"kubernetes-dashboard-855c9754f9-6gjq5\" (UID: \"9bd7c0e3-0f0c-48da-b343-e3b558c82dcc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6gjq5"
	Nov 21 15:00:36 no-preload-844780 kubelet[773]: I1121 15:00:36.360905     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9fc3bebd-775a-4d09-947e-e26400dfc4e3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kg884\" (UID: \"9fc3bebd-775a-4d09-947e-e26400dfc4e3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884"
	Nov 21 15:00:36 no-preload-844780 kubelet[773]: W1121 15:00:36.614665     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8e592d0d77ca159068ac44079622181a251dfc417c6aecf719cb09d63a714460/crio-331b6a592418b3601235c1776b40b90a0912c9e099b037cd7b6c2c1b35155bcd WatchSource:0}: Error finding container 331b6a592418b3601235c1776b40b90a0912c9e099b037cd7b6c2c1b35155bcd: Status 404 returned error can't find the container with id 331b6a592418b3601235c1776b40b90a0912c9e099b037cd7b6c2c1b35155bcd
	Nov 21 15:00:41 no-preload-844780 kubelet[773]: I1121 15:00:41.542976     773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 15:00:43 no-preload-844780 kubelet[773]: I1121 15:00:43.791398     773 scope.go:117] "RemoveContainer" containerID="bad1570406d2df1346ce65231982805f4da0a372fc55ac0b494f8a5f8320e6da"
	Nov 21 15:00:44 no-preload-844780 kubelet[773]: I1121 15:00:44.801155     773 scope.go:117] "RemoveContainer" containerID="bad1570406d2df1346ce65231982805f4da0a372fc55ac0b494f8a5f8320e6da"
	Nov 21 15:00:44 no-preload-844780 kubelet[773]: I1121 15:00:44.801447     773 scope.go:117] "RemoveContainer" containerID="4d8275dc9c9fbe9b9ef555eaaaded5f35d300457439cdea3f7c58ec32a2b3dd5"
	Nov 21 15:00:44 no-preload-844780 kubelet[773]: E1121 15:00:44.801587     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:00:45 no-preload-844780 kubelet[773]: I1121 15:00:45.823974     773 scope.go:117] "RemoveContainer" containerID="4d8275dc9c9fbe9b9ef555eaaaded5f35d300457439cdea3f7c58ec32a2b3dd5"
	Nov 21 15:00:45 no-preload-844780 kubelet[773]: E1121 15:00:45.824335     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:00:46 no-preload-844780 kubelet[773]: I1121 15:00:46.823722     773 scope.go:117] "RemoveContainer" containerID="4d8275dc9c9fbe9b9ef555eaaaded5f35d300457439cdea3f7c58ec32a2b3dd5"
	Nov 21 15:00:46 no-preload-844780 kubelet[773]: E1121 15:00:46.823891     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:01:01 no-preload-844780 kubelet[773]: I1121 15:01:01.664731     773 scope.go:117] "RemoveContainer" containerID="4d8275dc9c9fbe9b9ef555eaaaded5f35d300457439cdea3f7c58ec32a2b3dd5"
	Nov 21 15:01:01 no-preload-844780 kubelet[773]: I1121 15:01:01.873365     773 scope.go:117] "RemoveContainer" containerID="750148acce1d8a59f9fb4ba7b5e591406908b5ebfce4337f15a074d55153c1ee"
	Nov 21 15:01:01 no-preload-844780 kubelet[773]: I1121 15:01:01.873766     773 scope.go:117] "RemoveContainer" containerID="4d8275dc9c9fbe9b9ef555eaaaded5f35d300457439cdea3f7c58ec32a2b3dd5"
	Nov 21 15:01:01 no-preload-844780 kubelet[773]: E1121 15:01:01.877693     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:01:01 no-preload-844780 kubelet[773]: I1121 15:01:01.912041     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6gjq5" podStartSLOduration=10.748408544 podStartE2EDuration="25.912015385s" podCreationTimestamp="2025-11-21 15:00:36 +0000 UTC" firstStartedPulling="2025-11-21 15:00:36.654568983 +0000 UTC m=+11.237252027" lastFinishedPulling="2025-11-21 15:00:51.818175833 +0000 UTC m=+26.400858868" observedRunningTime="2025-11-21 15:00:52.863503643 +0000 UTC m=+27.446186695" watchObservedRunningTime="2025-11-21 15:01:01.912015385 +0000 UTC m=+36.494698421"
	Nov 21 15:01:04 no-preload-844780 kubelet[773]: I1121 15:01:04.883950     773 scope.go:117] "RemoveContainer" containerID="092aa66ca872f777a9f7bb1165f836461731b4ead738841303293cb5d0367e17"
	Nov 21 15:01:06 no-preload-844780 kubelet[773]: I1121 15:01:06.565278     773 scope.go:117] "RemoveContainer" containerID="750148acce1d8a59f9fb4ba7b5e591406908b5ebfce4337f15a074d55153c1ee"
	Nov 21 15:01:06 no-preload-844780 kubelet[773]: E1121 15:01:06.565467     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:01:19 no-preload-844780 kubelet[773]: I1121 15:01:19.663564     773 scope.go:117] "RemoveContainer" containerID="750148acce1d8a59f9fb4ba7b5e591406908b5ebfce4337f15a074d55153c1ee"
	Nov 21 15:01:19 no-preload-844780 kubelet[773]: E1121 15:01:19.663801     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kg884_kubernetes-dashboard(9fc3bebd-775a-4d09-947e-e26400dfc4e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kg884" podUID="9fc3bebd-775a-4d09-947e-e26400dfc4e3"
	Nov 21 15:01:25 no-preload-844780 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 15:01:25 no-preload-844780 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 15:01:25 no-preload-844780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
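The kubelet entries above show the CrashLoopBackOff delay for dashboard-metrics-scraper doubling from 10s to 20s between restart attempts. A minimal Go sketch of that policy (illustrative only, not kubelet source; the 10s start and 5m cap match kubelet's documented container restart back-off):

	// Sketch of kubelet-style CrashLoopBackOff: 10s initial delay,
	// doubled after each failed restart, capped at 5 minutes.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const initial, maxDelay = 10 * time.Second, 5 * time.Minute
		delay := initial
		for attempt := 1; attempt <= 7; attempt++ {
			fmt.Printf("restart attempt %d: back-off %s\n", attempt, delay) // 10s, 20s, 40s, ...
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
		}
	}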
	
	
	==> kubernetes-dashboard [803b6f25723ab454823682f0924b288c077286398a7e98198fd3b7bd5f286fc6] <==
	2025/11/21 15:00:51 Using namespace: kubernetes-dashboard
	2025/11/21 15:00:51 Using in-cluster config to connect to apiserver
	2025/11/21 15:00:51 Using secret token for csrf signing
	2025/11/21 15:00:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 15:00:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 15:00:51 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 15:00:51 Generating JWE encryption key
	2025/11/21 15:00:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 15:00:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 15:00:52 Initializing JWE encryption key from synchronized object
	2025/11/21 15:00:52 Creating in-cluster Sidecar client
	2025/11/21 15:00:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 15:00:52 Serving insecurely on HTTP port: 9090
	2025/11/21 15:01:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 15:00:51 Starting overwatch
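The Sidecar health check in this block retries on a fixed 30-second interval rather than backing off. A minimal sketch of that loop, where healthCheck is a hypothetical stand-in for the call to the dashboard-metrics-scraper service:

	// Fixed-interval retry, as in the dashboard log above.
	package main

	import (
		"errors"
		"log"
		"time"
	)

	// healthCheck is a hypothetical stand-in for the Sidecar metrics call.
	func healthCheck() error {
		return errors.New("the server is currently unable to handle the request")
	}

	func main() {
		for {
			if err := healthCheck(); err != nil {
				log.Printf("Metric client health check failed: %v. Retrying in 30 seconds.", err)
				time.Sleep(30 * time.Second)
				continue
			}
			log.Println("Metric client health check passed")
			return
		}
	}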
	
	
	==> storage-provisioner [092aa66ca872f777a9f7bb1165f836461731b4ead738841303293cb5d0367e17] <==
	I1121 15:00:33.934087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 15:01:03.940801       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
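The fatal line above is the first provisioner instance timing out against 10.96.0.1:443, the kubernetes service VIP (the first address of the 10.96.0.0/12 service CIDR used throughout this run). The reachability check can be sketched with the standard library alone, using the same 32s budget the failed request carried:

	// Probe the apiserver service VIP that the provisioner dials above.
	// Sketch only; meaningful when run from inside the cluster network.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 32*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // e.g. "i/o timeout", as logged above
			return
		}
		defer conn.Close()
		fmt.Println("apiserver VIP reachable")
	}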
	
	
	==> storage-provisioner [8e46ba8bffec92e3a6028389dbfdba8e09d0cd4bba5e18e2bdb7c932bdc655ad] <==
	I1121 15:01:04.935848       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 15:01:04.952578       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 15:01:04.952629       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 15:01:04.956085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:08.411267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:12.671270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:16.269222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:19.323522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:22.346215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:22.353590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:01:22.353778       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 15:01:22.354360       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbf9e551-e0be-48eb-aa2e-8e3a20e98a71", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-844780_40f1a25f-ef85-4e49-9dd6-f366efa78e6c became leader
	I1121 15:01:22.354648       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-844780_40f1a25f-ef85-4e49-9dd6-f366efa78e6c!
	W1121 15:01:22.359683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:22.370926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:01:22.455481       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-844780_40f1a25f-ef85-4e49-9dd6-f366efa78e6c!
	W1121 15:01:24.373364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:24.384673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:26.388342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:26.393067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:28.396829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:28.402380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:30.405808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:30.419292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
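The repeated warnings come from the provisioner taking and renewing its leader lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), the pattern deprecated in favor of coordination.k8s.io Leases. A hedged client-go sketch of the Lease-based equivalent; the identity string and callbacks are placeholders, not the provisioner's actual code:

	// Lease-based leader election sketch (client-go); the provisioner above
	// still uses an Endpoints lock, which is why the API server warns.
	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "my-provisioner-id"}, // placeholder identity
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader; start provisioning") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}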
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-844780 -n no-preload-844780
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-844780 -n no-preload-844780: exit status 2 (379.755253ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
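For reference, --format is a plain Go text/template evaluated against minikube's status struct; a reduced stand-in type (the field set here is assumed for illustration, not minikube's exact definition) reproduces the output above:

	// Evaluate a --format-style template over a reduced status struct.
	package main

	import (
		"fmt"
		"os"
		"text/template"
	)

	// Status is an illustrative stand-in for minikube's status fields.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running", Kubeconfig: "Configured"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Running", as above
			fmt.Fprintln(os.Stderr, err)
		}
	}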
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-844780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.56s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-902161 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-902161 --alsologtostderr -v=1: exit status 80 (1.976146781s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-902161 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 15:01:42.785520  490314 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:01:42.785860  490314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:01:42.785874  490314 out.go:374] Setting ErrFile to fd 2...
	I1121 15:01:42.785896  490314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:01:42.786223  490314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:01:42.786598  490314 out.go:368] Setting JSON to false
	I1121 15:01:42.786629  490314 mustload.go:66] Loading cluster: embed-certs-902161
	I1121 15:01:42.787204  490314 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:01:42.787786  490314 cli_runner.go:164] Run: docker container inspect embed-certs-902161 --format={{.State.Status}}
	I1121 15:01:42.805329  490314 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 15:01:42.805668  490314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:01:42.862151  490314 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-21 15:01:42.851949752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:01:42.862814  490314 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-902161 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 15:01:42.866502  490314 out.go:179] * Pausing node embed-certs-902161 ... 
	I1121 15:01:42.869436  490314 host.go:66] Checking if "embed-certs-902161" exists ...
	I1121 15:01:42.869797  490314 ssh_runner.go:195] Run: systemctl --version
	I1121 15:01:42.869854  490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-902161
	I1121 15:01:42.887156  490314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/embed-certs-902161/id_rsa Username:docker}
	I1121 15:01:42.991895  490314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:01:43.021953  490314 pause.go:52] kubelet running: true
	I1121 15:01:43.022051  490314 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:01:43.267761  490314 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:01:43.267862  490314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:01:43.340032  490314 cri.go:89] found id: "dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466"
	I1121 15:01:43.340058  490314 cri.go:89] found id: "20114e40488feaa7304dc637c72903ffa7244761bc47c1d43f62ba4230d0cac2"
	I1121 15:01:43.340063  490314 cri.go:89] found id: "71685dc5dca648adc7e487e76ac64efb2d3c2f7323ace890db6b8e8cce320c72"
	I1121 15:01:43.340068  490314 cri.go:89] found id: "fb6a440f0996534be16e2f3149870c14ebb2b9d6295a9f4a022fc7d662c2cb56"
	I1121 15:01:43.340071  490314 cri.go:89] found id: "f3a6175482209ccf02f79087fa28bee4117be57f12c7cf3f8d5ec9e1a96bc72a"
	I1121 15:01:43.340074  490314 cri.go:89] found id: "293c832724412d175c6e8ec646f8f5a753d6137d6354da90fcdc7748544c0176"
	I1121 15:01:43.340078  490314 cri.go:89] found id: "7f311233a0597f06fd619eca3d2076efd29a59099af3f91b2e7ad174953bec43"
	I1121 15:01:43.340081  490314 cri.go:89] found id: "f8022ed115d2dc50106d1d8099fe151f9220a20d78fad121bb27fe4d5d278763"
	I1121 15:01:43.340117  490314 cri.go:89] found id: "0040362a6ed65610771d229f1c844dc6fd8551a599ac1712dfac5b502944fa4e"
	I1121 15:01:43.340137  490314 cri.go:89] found id: "1b46439195479c002a2b7a0455d409a8d6e3b2b1b3864f183f8851bfa47b9f16"
	I1121 15:01:43.340141  490314 cri.go:89] found id: "e0241230681c85d418bfb39b23677a3797d0bbaea46d71be8cd3986fe0435074"
	I1121 15:01:43.340144  490314 cri.go:89] found id: ""
	I1121 15:01:43.340221  490314 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:01:43.351441  490314 retry.go:31] will retry after 293.755612ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:01:43Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:01:43.646031  490314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:01:43.659527  490314 pause.go:52] kubelet running: false
	I1121 15:01:43.659615  490314 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:01:43.826299  490314 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:01:43.826410  490314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:01:43.899433  490314 cri.go:89] found id: "dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466"
	I1121 15:01:43.899454  490314 cri.go:89] found id: "20114e40488feaa7304dc637c72903ffa7244761bc47c1d43f62ba4230d0cac2"
	I1121 15:01:43.899459  490314 cri.go:89] found id: "71685dc5dca648adc7e487e76ac64efb2d3c2f7323ace890db6b8e8cce320c72"
	I1121 15:01:43.899463  490314 cri.go:89] found id: "fb6a440f0996534be16e2f3149870c14ebb2b9d6295a9f4a022fc7d662c2cb56"
	I1121 15:01:43.899466  490314 cri.go:89] found id: "f3a6175482209ccf02f79087fa28bee4117be57f12c7cf3f8d5ec9e1a96bc72a"
	I1121 15:01:43.899470  490314 cri.go:89] found id: "293c832724412d175c6e8ec646f8f5a753d6137d6354da90fcdc7748544c0176"
	I1121 15:01:43.899475  490314 cri.go:89] found id: "7f311233a0597f06fd619eca3d2076efd29a59099af3f91b2e7ad174953bec43"
	I1121 15:01:43.899478  490314 cri.go:89] found id: "f8022ed115d2dc50106d1d8099fe151f9220a20d78fad121bb27fe4d5d278763"
	I1121 15:01:43.899481  490314 cri.go:89] found id: "0040362a6ed65610771d229f1c844dc6fd8551a599ac1712dfac5b502944fa4e"
	I1121 15:01:43.899491  490314 cri.go:89] found id: "1b46439195479c002a2b7a0455d409a8d6e3b2b1b3864f183f8851bfa47b9f16"
	I1121 15:01:43.899497  490314 cri.go:89] found id: "e0241230681c85d418bfb39b23677a3797d0bbaea46d71be8cd3986fe0435074"
	I1121 15:01:43.899500  490314 cri.go:89] found id: ""
	I1121 15:01:43.899556  490314 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:01:43.911188  490314 retry.go:31] will retry after 450.921825ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:01:43Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:01:44.362455  490314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:01:44.379326  490314 pause.go:52] kubelet running: false
	I1121 15:01:44.379402  490314 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:01:44.584567  490314 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:01:44.584667  490314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:01:44.673271  490314 cri.go:89] found id: "dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466"
	I1121 15:01:44.673293  490314 cri.go:89] found id: "20114e40488feaa7304dc637c72903ffa7244761bc47c1d43f62ba4230d0cac2"
	I1121 15:01:44.673299  490314 cri.go:89] found id: "71685dc5dca648adc7e487e76ac64efb2d3c2f7323ace890db6b8e8cce320c72"
	I1121 15:01:44.673303  490314 cri.go:89] found id: "fb6a440f0996534be16e2f3149870c14ebb2b9d6295a9f4a022fc7d662c2cb56"
	I1121 15:01:44.673306  490314 cri.go:89] found id: "f3a6175482209ccf02f79087fa28bee4117be57f12c7cf3f8d5ec9e1a96bc72a"
	I1121 15:01:44.673309  490314 cri.go:89] found id: "293c832724412d175c6e8ec646f8f5a753d6137d6354da90fcdc7748544c0176"
	I1121 15:01:44.673313  490314 cri.go:89] found id: "7f311233a0597f06fd619eca3d2076efd29a59099af3f91b2e7ad174953bec43"
	I1121 15:01:44.673316  490314 cri.go:89] found id: "f8022ed115d2dc50106d1d8099fe151f9220a20d78fad121bb27fe4d5d278763"
	I1121 15:01:44.673319  490314 cri.go:89] found id: "0040362a6ed65610771d229f1c844dc6fd8551a599ac1712dfac5b502944fa4e"
	I1121 15:01:44.673329  490314 cri.go:89] found id: "1b46439195479c002a2b7a0455d409a8d6e3b2b1b3864f183f8851bfa47b9f16"
	I1121 15:01:44.673332  490314 cri.go:89] found id: "e0241230681c85d418bfb39b23677a3797d0bbaea46d71be8cd3986fe0435074"
	I1121 15:01:44.673335  490314 cri.go:89] found id: ""
	I1121 15:01:44.673392  490314 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:01:44.693234  490314 out.go:203] 
	W1121 15:01:44.696164  490314 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:01:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:01:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 15:01:44.696187  490314 out.go:285] * 
	* 
	W1121 15:01:44.703681  490314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 15:01:44.706712  490314 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-902161 --alsologtostderr -v=1 failed: exit status 80
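The pause path dies at `sudo runc list -f json`: runc reads its state directory, and /run/runc does not exist on this node, which is plausible when CRI-O's configured OCI runtime keeps its state elsewhere or no container was ever registered under runc's default root. A sketch of the same probe from Go, checking the state directory first (the path is taken from the error above; everything else is illustrative):

	// Reproduce the failing probe from the pause path.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const root = "/run/runc" // runc's default state dir, per the error above
		if _, err := os.Stat(root); err != nil {
			fmt.Println("state dir missing; `runc list` will fail:", err)
			return
		}
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("err=%v out=%s\n", err, out)
	}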
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-902161
helpers_test.go:243: (dbg) docker inspect embed-certs-902161:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46",
	        "Created": "2025-11-21T14:58:43.65271767Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T15:00:31.015218535Z",
	            "FinishedAt": "2025-11-21T15:00:29.796855533Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/hostname",
	        "HostsPath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/hosts",
	        "LogPath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46-json.log",
	        "Name": "/embed-certs-902161",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-902161:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-902161",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46",
	                "LowerDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-902161",
	                "Source": "/var/lib/docker/volumes/embed-certs-902161/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-902161",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-902161",
	                "name.minikube.sigs.k8s.io": "embed-certs-902161",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1e1a6c8aeb120ac15c57bc53872bfc1872ae2afd91257b745f151916436bfd8",
	            "SandboxKey": "/var/run/docker/netns/b1e1a6c8aeb1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-902161": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:c1:88:2b:00:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "353a1d7977a8c37987b78fe82de2605299d1e2de5a9662311c657d4b51a465bb",
	                    "EndpointID": "7edae16e2f1e2f1beb1ee26838e9d7fd843694252c0a71c4218e953ee0609e20",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-902161",
	                        "38e73448071a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
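The SSH host port 33443 that the pause run dialed earlier comes straight out of this inspect payload; minikube extracts it with the Go template shown in the cli_runner line above, which can be replayed verbatim:

	// Replay minikube's port lookup against the same container.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "embed-certs-902161").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println(strings.TrimSpace(string(out))) // 33443 for the state captured above
	}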
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-902161 -n embed-certs-902161
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-902161 -n embed-certs-902161: exit status 2 (627.746424ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-902161 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-902161 logs -n 25: (1.655858391s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ image   │ old-k8s-version-357479 image list --format=json                                                                                                                                                                                               │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ pause   │ -p old-k8s-version-357479 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p cert-expiration-304879                                                                                                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 15:00 UTC │
	│ delete  │ -p disable-driver-mounts-984933                                                                                                                                                                                                               │ disable-driver-mounts-984933 │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ stop    │ -p no-preload-844780 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ addons  │ enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-844780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ stop    │ -p embed-certs-902161 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-902161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ image   │ no-preload-844780 image list --format=json                                                                                                                                                                                                    │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p no-preload-844780 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ image   │ embed-certs-902161 image list --format=json                                                                                                                                                                                                   │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p embed-certs-902161 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 15:01:34
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
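The header documents klog's line layout; when grepping these dumps it can be split mechanically with a small regexp (a sketch assuming the spacing shown in this log):

	// Parse a klog line of the form [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg.
	package main

	import (
		"fmt"
		"regexp"
	)

	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		m := klogLine.FindStringSubmatch("I1121 15:01:34.518821  489211 out.go:360] Setting OutFile to fd 1 ...")
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("sev=%s date=%s time=%s tid=%s loc=%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
	}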
	I1121 15:01:34.518821  489211 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:01:34.518965  489211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:01:34.518978  489211 out.go:374] Setting ErrFile to fd 2...
	I1121 15:01:34.518995  489211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:01:34.519290  489211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:01:34.519782  489211 out.go:368] Setting JSON to false
	I1121 15:01:34.520926  489211 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9846,"bootTime":1763727448,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 15:01:34.521005  489211 start.go:143] virtualization:  
	I1121 15:01:34.524888  489211 out.go:179] * [default-k8s-diff-port-124330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 15:01:34.527865  489211 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 15:01:34.527893  489211 notify.go:221] Checking for updates...
	I1121 15:01:34.533682  489211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 15:01:34.536667  489211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:01:34.539621  489211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 15:01:34.542684  489211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 15:01:34.545776  489211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 15:01:34.549303  489211 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:01:34.549413  489211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 15:01:34.585070  489211 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 15:01:34.585222  489211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:01:34.646355  489211 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 15:01:34.636685223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:01:34.646472  489211 docker.go:319] overlay module found
	I1121 15:01:34.649696  489211 out.go:179] * Using the docker driver based on user configuration
	I1121 15:01:34.652749  489211 start.go:309] selected driver: docker
	I1121 15:01:34.652772  489211 start.go:930] validating driver "docker" against <nil>
	I1121 15:01:34.652787  489211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 15:01:34.653554  489211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:01:34.707372  489211 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 15:01:34.697928377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:01:34.707551  489211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 15:01:34.707792  489211 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:01:34.710715  489211 out.go:179] * Using Docker driver with root privileges
	I1121 15:01:34.713561  489211 cni.go:84] Creating CNI manager for ""
	I1121 15:01:34.713636  489211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:01:34.713650  489211 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 15:01:34.713742  489211 start.go:353] cluster config:
	{Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:01:34.718567  489211 out.go:179] * Starting "default-k8s-diff-port-124330" primary control-plane node in "default-k8s-diff-port-124330" cluster
	I1121 15:01:34.721395  489211 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 15:01:34.724348  489211 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 15:01:34.727386  489211 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:01:34.727461  489211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 15:01:34.727462  489211 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 15:01:34.727623  489211 cache.go:65] Caching tarball of preloaded images
	I1121 15:01:34.727705  489211 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 15:01:34.727723  489211 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 15:01:34.727831  489211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/config.json ...
	I1121 15:01:34.727849  489211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/config.json: {Name:mk3c8c84e35051431e94986c3f53c898136e093e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:01:34.748016  489211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 15:01:34.748039  489211 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 15:01:34.748053  489211 cache.go:243] Successfully downloaded all kic artifacts
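	For reference, the digest check that lets the pull be skipped can be reproduced by hand; a minimal sketch (digest pin omitted):
	
	    # Non-zero exit means the base image is absent and minikube would pull it.
	    docker image inspect \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924 \
	      --format '{{.Id}} {{.RepoDigests}}'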
	I1121 15:01:34.748076  489211 start.go:360] acquireMachinesLock for default-k8s-diff-port-124330: {Name:mk8c422fee3dc1ab576ba87a9b21326872d469a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 15:01:34.748186  489211 start.go:364] duration metric: took 88.075µs to acquireMachinesLock for "default-k8s-diff-port-124330"
	I1121 15:01:34.748215  489211 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:01:34.748290  489211 start.go:125] createHost starting for "" (driver="docker")
	I1121 15:01:34.751664  489211 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 15:01:34.751914  489211 start.go:159] libmachine.API.Create for "default-k8s-diff-port-124330" (driver="docker")
	I1121 15:01:34.751967  489211 client.go:173] LocalClient.Create starting
	I1121 15:01:34.752065  489211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem
	I1121 15:01:34.752107  489211 main.go:143] libmachine: Decoding PEM data...
	I1121 15:01:34.752128  489211 main.go:143] libmachine: Parsing certificate...
	I1121 15:01:34.752185  489211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem
	I1121 15:01:34.752208  489211 main.go:143] libmachine: Decoding PEM data...
	I1121 15:01:34.752218  489211 main.go:143] libmachine: Parsing certificate...
	I1121 15:01:34.752671  489211 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-124330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 15:01:34.769414  489211 cli_runner.go:211] docker network inspect default-k8s-diff-port-124330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 15:01:34.769507  489211 network_create.go:284] running [docker network inspect default-k8s-diff-port-124330] to gather additional debugging logs...
	I1121 15:01:34.769531  489211 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-124330
	W1121 15:01:34.785351  489211 cli_runner.go:211] docker network inspect default-k8s-diff-port-124330 returned with exit code 1
	I1121 15:01:34.785386  489211 network_create.go:287] error running [docker network inspect default-k8s-diff-port-124330]: docker network inspect default-k8s-diff-port-124330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-124330 not found
	I1121 15:01:34.785401  489211 network_create.go:289] output of [docker network inspect default-k8s-diff-port-124330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-124330 not found
	
	** /stderr **
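	The failed inspect above is the expected existence probe: exit status 1 with "network ... not found" is the signal that tells minikube to go create the network. The same probe by hand, as a sketch:
	
	    # Suppress output; the exit code alone distinguishes present from absent.
	    docker network inspect default-k8s-diff-port-124330 >/dev/null 2>&1 \
	      || echo "absent: minikube proceeds to network_create"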
	I1121 15:01:34.785514  489211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 15:01:34.804072  489211 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-82d3b8bc8a36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:46:f3:82:e8:95} reservation:<nil>}
	I1121 15:01:34.804511  489211 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-741c868a6917 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:04:b7:a7:98:dc} reservation:<nil>}
	I1121 15:01:34.804765  489211 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-047a1ecabae6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:eb:03:dd:6a:cd} reservation:<nil>}
	I1121 15:01:34.805062  489211 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-353a1d7977a8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:d6:61:83:05:3c} reservation:<nil>}
	I1121 15:01:34.805588  489211 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a5c790}
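	The scan above walks candidate /24s (192.168.49.0, .58.0, .67.0, .76.0), skips each one already bound to a host bridge, and settles on 192.168.85.0/24. The occupied subnets can be listed directly; a sketch, not minikube's own code path:
	
	    # Print the subnet backing every user-defined bridge network.
	    for net in $(docker network ls --filter driver=bridge -q); do
	      docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}' "$net"
	    done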
	I1121 15:01:34.805611  489211 network_create.go:124] attempt to create docker network default-k8s-diff-port-124330 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 15:01:34.805677  489211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-124330 default-k8s-diff-port-124330
	I1121 15:01:34.871978  489211 network_create.go:108] docker network default-k8s-diff-port-124330 192.168.85.0/24 created
	I1121 15:01:34.872015  489211 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-124330" container
	I1121 15:01:34.872109  489211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 15:01:34.890059  489211 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-124330 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-124330 --label created_by.minikube.sigs.k8s.io=true
	I1121 15:01:34.908010  489211 oci.go:103] Successfully created a docker volume default-k8s-diff-port-124330
	I1121 15:01:34.908118  489211 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-124330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-124330 --entrypoint /usr/bin/test -v default-k8s-diff-port-124330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 15:01:35.489259  489211 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-124330
	I1121 15:01:35.489337  489211 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:01:35.489353  489211 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 15:01:35.489420  489211 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-124330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 15:01:39.935235  489211 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-124330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.445745839s)
	I1121 15:01:39.935269  489211 kic.go:203] duration metric: took 4.445912577s to extract preloaded images to volume ...
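	The 4.4s step above is a plain tar-over-volume trick: the preload tarball is bind-mounted read-only into a throwaway container and unpacked onto the named volume that later backs /var in the node. Generic form, with the tarball path as a placeholder:
	
	    # PRELOAD stands in for the cached .tar.lz4 path from the log.
	    PRELOAD=/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PRELOAD":/preloaded.tar:ro \
	      -v default-k8s-diff-port-124330:/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924 \
	      -I lz4 -xf /preloaded.tar -C /extractDir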
	W1121 15:01:39.935419  489211 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 15:01:39.935546  489211 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 15:01:39.993793  489211 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-124330 --name default-k8s-diff-port-124330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-124330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-124330 --network default-k8s-diff-port-124330 --ip 192.168.85.2 --volume default-k8s-diff-port-124330:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
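	Stripped of labels and the extra published ports, the node boils down to a privileged container with a static IP on the profile network and the non-default API port 8444 forwarded from an auto-assigned loopback port; a trimmed sketch of the command above:
	
	    # --publish=127.0.0.1::8444 lets Docker pick the host port (33448 here).
	    docker run -d -t --privileged \
	      --network default-k8s-diff-port-124330 --ip 192.168.85.2 \
	      --hostname default-k8s-diff-port-124330 \
	      --memory=3072mb --cpus=2 \
	      --publish=127.0.0.1::8444 \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924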
	I1121 15:01:40.343252  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Running}}
	I1121 15:01:40.364006  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:01:40.397536  489211 cli_runner.go:164] Run: docker exec default-k8s-diff-port-124330 stat /var/lib/dpkg/alternatives/iptables
	I1121 15:01:40.444911  489211 oci.go:144] the created container "default-k8s-diff-port-124330" has a running status.
	I1121 15:01:40.444938  489211 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa...
	I1121 15:01:40.926525  489211 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 15:01:40.950744  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:01:40.991069  489211 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 15:01:40.991088  489211 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-124330 chown docker:docker /home/docker/.ssh/authorized_keys]
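	Key provisioning (kic.go:225 through the chown above) amounts to generating a keypair on the host and installing the public half for the in-container docker user. A hand-rolled equivalent, not the kic_runner code path itself:
	
	    # Paths are illustrative; minikube keeps the key under .minikube/machines/<profile>/.
	    ssh-keygen -t rsa -N '' -f ./id_rsa
	    docker cp ./id_rsa.pub default-k8s-diff-port-124330:/home/docker/.ssh/authorized_keys
	    docker exec --privileged default-k8s-diff-port-124330 \
	      chown docker:docker /home/docker/.ssh/authorized_keys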
	I1121 15:01:41.052958  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:01:41.086212  489211 machine.go:94] provisionDockerMachine start ...
	I1121 15:01:41.086316  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:41.111020  489211 main.go:143] libmachine: Using SSH client type: native
	I1121 15:01:41.111356  489211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1121 15:01:41.111369  489211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 15:01:41.112146  489211 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 15:01:44.264038  489211 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124330
	
	I1121 15:01:44.264061  489211 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-124330"
	I1121 15:01:44.264138  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:44.282849  489211 main.go:143] libmachine: Using SSH client type: native
	I1121 15:01:44.283311  489211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1121 15:01:44.283330  489211 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-124330 && echo "default-k8s-diff-port-124330" | sudo tee /etc/hostname
	I1121 15:01:44.451922  489211 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124330
	
	I1121 15:01:44.452008  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:44.477076  489211 main.go:143] libmachine: Using SSH client type: native
	I1121 15:01:44.477397  489211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1121 15:01:44.477416  489211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-124330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-124330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-124330' | sudo tee -a /etc/hosts; 
				fi
			fi
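	Once the hostname and /etc/hosts script above has run, the result can be verified over the same forwarded SSH port (33448 in this run; the port is auto-assigned, so check docker port first):
	
	    ssh -p 33448 -i ./id_rsa docker@127.0.0.1 \
	      'hostname && grep 127.0.1.1 /etc/hosts'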
	
	
	==> CRI-O <==
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.429288233Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e052929b-1725-4644-b8da-1cb5b4941988 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.430416411Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d5060d00-516d-486e-80f7-95b49df60ef9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.430512724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.43804297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.438228671Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7412d9bb2ab96b1780ac05537baf9131ebdbe033970b86ab9562ab38072da76b/merged/etc/passwd: no such file or directory"
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.438258907Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7412d9bb2ab96b1780ac05537baf9131ebdbe033970b86ab9562ab38072da76b/merged/etc/group: no such file or directory"
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.438517142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.454884185Z" level=info msg="Created container dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466: kube-system/storage-provisioner/storage-provisioner" id=d5060d00-516d-486e-80f7-95b49df60ef9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.455947804Z" level=info msg="Starting container: dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466" id=fa28ff76-d522-4161-a89e-822a43f7f8d1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.457466057Z" level=info msg="Started container" PID=1651 containerID=dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466 description=kube-system/storage-provisioner/storage-provisioner id=fa28ff76-d522-4161-a89e-822a43f7f8d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8340300a303b53ca127c01906bc5bb6ebcc80e54c442e7cf31d1ae0f73d034c
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.022319452Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.027205819Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.027384873Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.027479808Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.036290299Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.036327961Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.0382362Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.046237328Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.04630722Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.046359077Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.051233005Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.051269625Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.051289129Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.056559394Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.056744717Z" level=info msg="Updated default CNI network name to kindnet"
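	The CRI-O excerpt above is the tail of the runtime's journal inside the embed-certs-902161 node; the same view can be pulled after the fact (CRI-O runs as a systemd unit in the kic image):
	
	    minikube ssh -p embed-certs-902161 -- sudo journalctl -u crio --no-pager -n 25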
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	dbcc098f64f64       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   e8340300a303b       storage-provisioner                          kube-system
	1b46439195479       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   79dde5e9b8a4f       dashboard-metrics-scraper-6ffb444bf9-ztccv   kubernetes-dashboard
	e0241230681c8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   d0f47c6d3178c       kubernetes-dashboard-855c9754f9-rlwns        kubernetes-dashboard
	0058c9b1eab6f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   309acb12594df       busybox                                      default
	20114e40488fe       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   ad69edfff0d93       coredns-66bc5c9577-gttll                     kube-system
	71685dc5dca64       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   e8340300a303b       storage-provisioner                          kube-system
	fb6a440f09965       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   f314ea602a96d       kube-proxy-wkbb9                             kube-system
	f3a6175482209       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   7ff4b9250b21c       kindnet-9zs98                                kube-system
	293c832724412       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   40d7f235bbe71       kube-apiserver-embed-certs-902161            kube-system
	7f311233a0597       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   2ae02f79a8e04       kube-controller-manager-embed-certs-902161   kube-system
	f8022ed115d2d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0a292a6ecfb3e       kube-scheduler-embed-certs-902161            kube-system
	0040362a6ed65       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   dd4bd76222908       etcd-embed-certs-902161                      kube-system
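	The table above matches crictl's listing, including the two Exited rows (storage-provisioner attempt 1, dashboard-metrics-scraper attempt 2). To reproduce it on the node:
	
	    # -a includes exited containers, not just running ones.
	    minikube ssh -p embed-certs-902161 -- sudo crictl ps -a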
	
	
	==> coredns [20114e40488feaa7304dc637c72903ffa7244761bc47c1d43f62ba4230d0cac2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56919 - 38638 "HINFO IN 1406632567858009987.1671126665361810321. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005848388s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
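	The i/o timeouts above mean CoreDNS could not reach the in-cluster apiserver VIP (10.96.0.1:443) for a window during the restart; the "Still waiting" lines are the same condition seen from the ready plugin. A quick end-to-end probe from a scratch pod (assumes busybox:1.36 is pullable in this environment):
	
	    # Exercises CoreDNS itself: resolution only succeeds once the kubernetes
	    # plugin has synced against the apiserver.
	    kubectl run dnscheck --rm -it --restart=Never --image=busybox:1.36 -- \
	      nslookup kubernetes.default.svc.cluster.local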
	
	
	==> describe nodes <==
	Name:               embed-certs-902161
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-902161
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=embed-certs-902161
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_59_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:59:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-902161
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:01:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:01:27 +0000   Fri, 21 Nov 2025 14:59:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:01:27 +0000   Fri, 21 Nov 2025 14:59:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:01:27 +0000   Fri, 21 Nov 2025 14:59:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 15:01:27 +0000   Fri, 21 Nov 2025 15:00:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-902161
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                889cdbbe-ffd5-4f2f-86b7-0117a83246d8
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-gttll                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 etcd-embed-certs-902161                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m32s
	  kube-system                 kindnet-9zs98                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-embed-certs-902161             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-embed-certs-902161    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-wkbb9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-embed-certs-902161             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ztccv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rlwns         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m25s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Warning  CgroupV1                 2m44s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m43s (x8 over 2m43s)  kubelet          Node embed-certs-902161 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node embed-certs-902161 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m43s (x8 over 2m43s)  kubelet          Node embed-certs-902161 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m32s                  kubelet          Node embed-certs-902161 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m32s                  kubelet          Node embed-certs-902161 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s                  kubelet          Node embed-certs-902161 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m28s                  node-controller  Node embed-certs-902161 event: Registered Node embed-certs-902161 in Controller
	  Normal   NodeReady                105s                   kubelet          Node embed-certs-902161 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node embed-certs-902161 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node embed-certs-902161 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node embed-certs-902161 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-902161 event: Registered Node embed-certs-902161 in Controller
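	The duplicated NodeHasSufficient* blocks above are expected: one burst per kubelet start (initial boot at ~14:59, restart at ~15:00). The same view, plus a time-sorted event stream for the node:
	
	    kubectl describe node embed-certs-902161
	    kubectl get events --field-selector involvedObject.name=embed-certs-902161 \
	      --sort-by=.lastTimestamp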
	
	
	==> dmesg <==
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	[Nov21 15:00] overlayfs: idmapped layers are currently not supported
	[ +13.392744] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0040362a6ed65610771d229f1c844dc6fd8551a599ac1712dfac5b502944fa4e] <==
	{"level":"warn","ts":"2025-11-21T15:00:44.482969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.515660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.566259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.596650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.624568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.663654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.725017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.740892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.757136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.813438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.830919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.851210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.881104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.921521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.950615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.978905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.014117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.036108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.058555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.088484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.119910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.172719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.208231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.256650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.466356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41248","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:01:46 up  2:44,  0 user,  load average: 3.03, 3.19, 2.69
	Linux embed-certs-902161 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3a6175482209ccf02f79087fa28bee4117be57f12c7cf3f8d5ec9e1a96bc72a] <==
	I1121 15:00:48.743906       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 15:00:48.770853       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 15:00:48.771004       1 main.go:148] setting mtu 1500 for CNI 
	I1121 15:00:48.771018       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 15:00:48.771032       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T15:00:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 15:00:49.021875       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 15:00:49.021915       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 15:00:49.021925       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 15:00:49.022312       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 15:01:19.022137       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 15:01:19.022140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 15:01:19.022258       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 15:01:19.022269       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1121 15:01:20.222550       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 15:01:20.222590       1 metrics.go:72] Registering metrics
	I1121 15:01:20.222645       1 controller.go:711] "Syncing nftables rules"
	I1121 15:01:29.021994       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 15:01:29.022056       1 main.go:301] handling current node
	I1121 15:01:39.026783       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 15:01:39.026821       1 main.go:301] handling current node
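	kindnet hits the same 10.96.0.1:443 timeout at 15:01:19 that CoreDNS logged, then recovers one second later when its caches sync and it resumes nftables programming. Its logs can be pulled across the DaemonSet (the app=kindnet label is assumed from the kindnet manifest):
	
	    kubectl -n kube-system logs -l app=kindnet --tail=20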
	
	
	==> kube-apiserver [293c832724412d175c6e8ec646f8f5a753d6137d6354da90fcdc7748544c0176] <==
	I1121 15:00:47.340106       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 15:00:47.340112       1 cache.go:39] Caches are synced for autoregister controller
	I1121 15:00:47.351107       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1121 15:00:47.351356       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 15:00:47.351396       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 15:00:47.369946       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:00:47.370331       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1121 15:00:47.370904       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 15:00:47.382173       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 15:00:47.382207       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 15:00:47.394933       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 15:00:47.381899       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 15:00:47.400565       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1121 15:00:47.529808       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 15:00:47.765007       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 15:00:47.984587       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 15:00:49.294452       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 15:00:49.630275       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 15:00:49.798437       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 15:00:49.897439       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 15:00:50.154119       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.76.53"}
	I1121 15:00:50.182857       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.3.94"}
	I1121 15:00:50.892315       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 15:00:51.084709       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 15:00:51.130310       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
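	By 15:00:47 the apiserver's caches are synced and quota evaluators register as objects are created; the single Error line (removing old endpoints) is the usual restart race and does not recur. Current readiness can be inspected check by check:
	
	    kubectl get --raw '/readyz?verbose'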
	
	
	==> kube-controller-manager [7f311233a0597f06fd619eca3d2076efd29a59099af3f91b2e7ad174953bec43] <==
	I1121 15:00:50.857612       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 15:00:50.857645       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 15:00:50.864584       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 15:00:50.864976       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:00:50.865252       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:00:50.871596       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 15:00:50.871952       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 15:00:50.873346       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 15:00:50.873572       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 15:00:50.879114       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 15:00:50.879643       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 15:00:50.882191       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 15:00:50.882433       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 15:00:50.884649       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 15:00:50.895276       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 15:00:50.895455       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 15:00:50.896252       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-902161"
	I1121 15:00:50.896345       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 15:00:50.895517       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 15:00:50.904009       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 15:00:50.905530       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 15:00:50.908403       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:00:50.922513       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 15:00:50.928358       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 15:00:50.932107       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
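	The controller-manager reaches steady state once all informer caches are synced (the block above) and the node-lifecycle controller marks the zone Normal. That it still holds leadership can be confirmed via its leader-election lease:
	
	    kubectl -n kube-system get lease kube-controller-manager -o yaml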
	
	
	==> kube-proxy [fb6a440f0996534be16e2f3149870c14ebb2b9d6295a9f4a022fc7d662c2cb56] <==
	I1121 15:00:50.258698       1 server_linux.go:53] "Using iptables proxy"
	I1121 15:00:50.439881       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 15:00:50.540204       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 15:00:50.540310       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 15:00:50.540450       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 15:00:50.722732       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 15:00:50.722858       1 server_linux.go:132] "Using iptables Proxier"
	I1121 15:00:50.851275       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 15:00:50.851772       1 server.go:527] "Version info" version="v1.34.1"
	I1121 15:00:50.851828       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:00:50.914836       1 config.go:200] "Starting service config controller"
	I1121 15:00:50.934095       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 15:00:50.919194       1 config.go:106] "Starting endpoint slice config controller"
	I1121 15:00:50.941939       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 15:00:50.946151       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 15:00:50.933868       1 config.go:309] "Starting node config controller"
	I1121 15:00:50.946376       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 15:00:50.946408       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 15:00:50.919236       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 15:00:50.946489       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 15:00:51.044809       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 15:00:51.047076       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f8022ed115d2dc50106d1d8099fe151f9220a20d78fad121bb27fe4d5d278763] <==
	I1121 15:00:43.637551       1 serving.go:386] Generated self-signed cert in-memory
	W1121 15:00:46.911679       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 15:00:46.911708       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 15:00:46.911718       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 15:00:46.911736       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 15:00:47.299332       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 15:00:47.299358       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:00:47.301810       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 15:00:47.301911       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:00:47.301929       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:00:47.301949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 15:00:47.408668       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
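
	Note on the requestheader warnings above: they are transient at scheduler startup, and the message includes its own fix. A hedged sketch of that rolebinding, using --user rather than --serviceaccount because the log shows the scheduler authenticating as the user "system:kube-scheduler" (the binding name here is illustrative):

	    kubectl create rolebinding scheduler-authentication-reader \
	      --namespace=kube-system \
	      --role=extension-apiserver-authentication-reader \
	      --user=system:kube-scheduler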
	
	
	==> kubelet <==
	Nov 21 15:00:51 embed-certs-902161 kubelet[785]: I1121 15:00:51.286211     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9e943b24-8b8a-4b65-9a6f-5b327676335b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ztccv\" (UID: \"9e943b24-8b8a-4b65-9a6f-5b327676335b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv"
	Nov 21 15:00:51 embed-certs-902161 kubelet[785]: I1121 15:00:51.386827     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/81841e6d-5253-428b-8f5f-98af5f095bfc-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rlwns\" (UID: \"81841e6d-5253-428b-8f5f-98af5f095bfc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlwns"
	Nov 21 15:00:51 embed-certs-902161 kubelet[785]: I1121 15:00:51.386896     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbsn\" (UniqueName: \"kubernetes.io/projected/81841e6d-5253-428b-8f5f-98af5f095bfc-kube-api-access-qfbsn\") pod \"kubernetes-dashboard-855c9754f9-rlwns\" (UID: \"81841e6d-5253-428b-8f5f-98af5f095bfc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlwns"
	Nov 21 15:00:51 embed-certs-902161 kubelet[785]: W1121 15:00:51.613841     785 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/crio-79dde5e9b8a4fc00b45699273b1acc7adaf1e0ea5e0752317a3d254b370d1b14 WatchSource:0}: Error finding container 79dde5e9b8a4fc00b45699273b1acc7adaf1e0ea5e0752317a3d254b370d1b14: Status 404 returned error can't find the container with id 79dde5e9b8a4fc00b45699273b1acc7adaf1e0ea5e0752317a3d254b370d1b14
	Nov 21 15:00:51 embed-certs-902161 kubelet[785]: W1121 15:00:51.647574     785 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/crio-d0f47c6d3178c545f8c691f7a9afbb93bafa59d62a812e2d8e162edc43551d0f WatchSource:0}: Error finding container d0f47c6d3178c545f8c691f7a9afbb93bafa59d62a812e2d8e162edc43551d0f: Status 404 returned error can't find the container with id d0f47c6d3178c545f8c691f7a9afbb93bafa59d62a812e2d8e162edc43551d0f
	Nov 21 15:00:57 embed-certs-902161 kubelet[785]: I1121 15:00:57.301804     785 scope.go:117] "RemoveContainer" containerID="8441a3fd1cbfe4703e47a5a2fbbfce80e821f3b96cb65c3678c2a05dacf173e0"
	Nov 21 15:00:57 embed-certs-902161 kubelet[785]: I1121 15:00:57.990471     785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 15:00:58 embed-certs-902161 kubelet[785]: I1121 15:00:58.306967     785 scope.go:117] "RemoveContainer" containerID="8441a3fd1cbfe4703e47a5a2fbbfce80e821f3b96cb65c3678c2a05dacf173e0"
	Nov 21 15:00:58 embed-certs-902161 kubelet[785]: I1121 15:00:58.307483     785 scope.go:117] "RemoveContainer" containerID="550fa2b54ad7db592179e3a0b07717ab50fb40e6156fd27ff534f6563463cdee"
	Nov 21 15:00:58 embed-certs-902161 kubelet[785]: E1121 15:00:58.307646     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ztccv_kubernetes-dashboard(9e943b24-8b8a-4b65-9a6f-5b327676335b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv" podUID="9e943b24-8b8a-4b65-9a6f-5b327676335b"
	Nov 21 15:01:01 embed-certs-902161 kubelet[785]: I1121 15:01:01.577770     785 scope.go:117] "RemoveContainer" containerID="550fa2b54ad7db592179e3a0b07717ab50fb40e6156fd27ff534f6563463cdee"
	Nov 21 15:01:01 embed-certs-902161 kubelet[785]: E1121 15:01:01.577967     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ztccv_kubernetes-dashboard(9e943b24-8b8a-4b65-9a6f-5b327676335b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv" podUID="9e943b24-8b8a-4b65-9a6f-5b327676335b"
	Nov 21 15:01:16 embed-certs-902161 kubelet[785]: I1121 15:01:16.051336     785 scope.go:117] "RemoveContainer" containerID="550fa2b54ad7db592179e3a0b07717ab50fb40e6156fd27ff534f6563463cdee"
	Nov 21 15:01:16 embed-certs-902161 kubelet[785]: I1121 15:01:16.413531     785 scope.go:117] "RemoveContainer" containerID="550fa2b54ad7db592179e3a0b07717ab50fb40e6156fd27ff534f6563463cdee"
	Nov 21 15:01:16 embed-certs-902161 kubelet[785]: I1121 15:01:16.413820     785 scope.go:117] "RemoveContainer" containerID="1b46439195479c002a2b7a0455d409a8d6e3b2b1b3864f183f8851bfa47b9f16"
	Nov 21 15:01:16 embed-certs-902161 kubelet[785]: E1121 15:01:16.413980     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ztccv_kubernetes-dashboard(9e943b24-8b8a-4b65-9a6f-5b327676335b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv" podUID="9e943b24-8b8a-4b65-9a6f-5b327676335b"
	Nov 21 15:01:16 embed-certs-902161 kubelet[785]: I1121 15:01:16.436602     785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlwns" podStartSLOduration=15.48148553 podStartE2EDuration="25.436584926s" podCreationTimestamp="2025-11-21 15:00:51 +0000 UTC" firstStartedPulling="2025-11-21 15:00:51.651612009 +0000 UTC m=+13.019206277" lastFinishedPulling="2025-11-21 15:01:01.606711405 +0000 UTC m=+22.974305673" observedRunningTime="2025-11-21 15:01:02.404417147 +0000 UTC m=+23.772011522" watchObservedRunningTime="2025-11-21 15:01:16.436584926 +0000 UTC m=+37.804179194"
	Nov 21 15:01:20 embed-certs-902161 kubelet[785]: I1121 15:01:20.426799     785 scope.go:117] "RemoveContainer" containerID="71685dc5dca648adc7e487e76ac64efb2d3c2f7323ace890db6b8e8cce320c72"
	Nov 21 15:01:21 embed-certs-902161 kubelet[785]: I1121 15:01:21.577795     785 scope.go:117] "RemoveContainer" containerID="1b46439195479c002a2b7a0455d409a8d6e3b2b1b3864f183f8851bfa47b9f16"
	Nov 21 15:01:21 embed-certs-902161 kubelet[785]: E1121 15:01:21.578015     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ztccv_kubernetes-dashboard(9e943b24-8b8a-4b65-9a6f-5b327676335b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv" podUID="9e943b24-8b8a-4b65-9a6f-5b327676335b"
	Nov 21 15:01:33 embed-certs-902161 kubelet[785]: I1121 15:01:33.050987     785 scope.go:117] "RemoveContainer" containerID="1b46439195479c002a2b7a0455d409a8d6e3b2b1b3864f183f8851bfa47b9f16"
	Nov 21 15:01:33 embed-certs-902161 kubelet[785]: E1121 15:01:33.051651     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ztccv_kubernetes-dashboard(9e943b24-8b8a-4b65-9a6f-5b327676335b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv" podUID="9e943b24-8b8a-4b65-9a6f-5b327676335b"
	Nov 21 15:01:43 embed-certs-902161 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 15:01:43 embed-certs-902161 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 15:01:43 embed-certs-902161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
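
	Note on the CrashLoopBackOff entries above: kubelet is backing off (10s, then 20s) while restarting dashboard-metrics-scraper, until kubelet itself is stopped at 15:01:43. A typical follow-up, sketched here and not executed by this test:

	    # logs from the previous (failed) container instance, then the pod's events
	    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-ztccv --previous
	    kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-ztccv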
	
	
	==> kubernetes-dashboard [e0241230681c85d418bfb39b23677a3797d0bbaea46d71be8cd3986fe0435074] <==
	2025/11/21 15:01:01 Using namespace: kubernetes-dashboard
	2025/11/21 15:01:01 Using in-cluster config to connect to apiserver
	2025/11/21 15:01:01 Using secret token for csrf signing
	2025/11/21 15:01:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 15:01:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 15:01:01 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 15:01:01 Generating JWE encryption key
	2025/11/21 15:01:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 15:01:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 15:01:02 Initializing JWE encryption key from synchronized object
	2025/11/21 15:01:02 Creating in-cluster Sidecar client
	2025/11/21 15:01:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 15:01:02 Serving insecurely on HTTP port: 9090
	2025/11/21 15:01:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 15:01:01 Starting overwatch
	
	
	==> storage-provisioner [71685dc5dca648adc7e487e76ac64efb2d3c2f7323ace890db6b8e8cce320c72] <==
	I1121 15:00:49.887297       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 15:01:19.946186       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466] <==
	I1121 15:01:20.471455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 15:01:20.484511       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 15:01:20.484635       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 15:01:20.488299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:23.944228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:28.205448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:31.803579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:34.856780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:37.879455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:37.887907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:01:37.888095       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 15:01:37.888329       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-902161_75135da3-7e02-4dec-9f01-45123bd887ff!
	I1121 15:01:37.889620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ebbc0454-23a8-4831-a020-9201f95f5437", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-902161_75135da3-7e02-4dec-9f01-45123bd887ff became leader
	W1121 15:01:37.901948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:37.918001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:01:37.988990       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-902161_75135da3-7e02-4dec-9f01-45123bd887ff!
	W1121 15:01:39.921442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:39.926832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:41.930734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:41.938018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:43.941498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:43.946128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:45.949439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:45.955177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
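The storage-provisioner warnings in the log above are deprecation notices from its Endpoints-based leader election, not failures: the lease is acquired at 15:01:37. To inspect the election object in either form, a sketch (not run as part of this job):

    # the legacy Endpoints lock this provisioner still uses
    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    # the coordination.k8s.io Lease objects that newer leader election uses instead
    kubectl -n kube-system get leases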
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-902161 -n embed-certs-902161
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-902161 -n embed-certs-902161: exit status 2 (457.232992ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-902161 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-902161
helpers_test.go:243: (dbg) docker inspect embed-certs-902161:

-- stdout --
	[
	    {
	        "Id": "38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46",
	        "Created": "2025-11-21T14:58:43.65271767Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T15:00:31.015218535Z",
	            "FinishedAt": "2025-11-21T15:00:29.796855533Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/hostname",
	        "HostsPath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/hosts",
	        "LogPath": "/var/lib/docker/containers/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46-json.log",
	        "Name": "/embed-certs-902161",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-902161:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-902161",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46",
	                "LowerDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b655fbbd9ad31e0c4853ba9d67f87de572b3d8773fd103fccc5932eb2e963585/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-902161",
	                "Source": "/var/lib/docker/volumes/embed-certs-902161/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-902161",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-902161",
	                "name.minikube.sigs.k8s.io": "embed-certs-902161",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1e1a6c8aeb120ac15c57bc53872bfc1872ae2afd91257b745f151916436bfd8",
	            "SandboxKey": "/var/run/docker/netns/b1e1a6c8aeb1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-902161": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:c1:88:2b:00:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "353a1d7977a8c37987b78fe82de2605299d1e2de5a9662311c657d4b51a465bb",
	                    "EndpointID": "7edae16e2f1e2f1beb1ee26838e9d7fd843694252c0a71c4218e953ee0609e20",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-902161",
	                        "38e73448071a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
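The inspect dump above can be reduced to a single field with a Go template; for example, the host port mapped to the API server's 8443/tcp (a sketch against this container, not part of the test run):

    docker inspect --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' embed-certs-902161
    # prints 33446 for the state captured above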
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-902161 -n embed-certs-902161
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-902161 -n embed-certs-902161: exit status 2 (466.896261ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-902161 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-902161 logs -n 25: (1.75514959s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:57 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p cert-expiration-304879 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ image   │ old-k8s-version-357479 image list --format=json                                                                                                                                                                                               │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ pause   │ -p old-k8s-version-357479 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │                     │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p cert-expiration-304879                                                                                                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 15:00 UTC │
	│ delete  │ -p disable-driver-mounts-984933                                                                                                                                                                                                               │ disable-driver-mounts-984933 │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ stop    │ -p no-preload-844780 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ addons  │ enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-844780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ stop    │ -p embed-certs-902161 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-902161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ image   │ no-preload-844780 image list --format=json                                                                                                                                                                                                    │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p no-preload-844780 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ image   │ embed-certs-902161 image list --format=json                                                                                                                                                                                                   │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p embed-certs-902161 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 15:01:34
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 15:01:34.518821  489211 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:01:34.518965  489211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:01:34.518978  489211 out.go:374] Setting ErrFile to fd 2...
	I1121 15:01:34.518995  489211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:01:34.519290  489211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:01:34.519782  489211 out.go:368] Setting JSON to false
	I1121 15:01:34.520926  489211 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9846,"bootTime":1763727448,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 15:01:34.521005  489211 start.go:143] virtualization:  
	I1121 15:01:34.524888  489211 out.go:179] * [default-k8s-diff-port-124330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 15:01:34.527865  489211 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 15:01:34.527893  489211 notify.go:221] Checking for updates...
	I1121 15:01:34.533682  489211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 15:01:34.536667  489211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:01:34.539621  489211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 15:01:34.542684  489211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 15:01:34.545776  489211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 15:01:34.549303  489211 config.go:182] Loaded profile config "embed-certs-902161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:01:34.549413  489211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 15:01:34.585070  489211 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 15:01:34.585222  489211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:01:34.646355  489211 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 15:01:34.636685223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:01:34.646472  489211 docker.go:319] overlay module found
	I1121 15:01:34.649696  489211 out.go:179] * Using the docker driver based on user configuration
	I1121 15:01:34.652749  489211 start.go:309] selected driver: docker
	I1121 15:01:34.652772  489211 start.go:930] validating driver "docker" against <nil>
	I1121 15:01:34.652787  489211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 15:01:34.653554  489211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:01:34.707372  489211 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 15:01:34.697928377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:01:34.707551  489211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 15:01:34.707792  489211 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:01:34.710715  489211 out.go:179] * Using Docker driver with root privileges
	I1121 15:01:34.713561  489211 cni.go:84] Creating CNI manager for ""
	I1121 15:01:34.713636  489211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:01:34.713650  489211 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 15:01:34.713742  489211 start.go:353] cluster config:
	{Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:01:34.718567  489211 out.go:179] * Starting "default-k8s-diff-port-124330" primary control-plane node in "default-k8s-diff-port-124330" cluster
	I1121 15:01:34.721395  489211 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 15:01:34.724348  489211 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 15:01:34.727386  489211 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:01:34.727461  489211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 15:01:34.727462  489211 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 15:01:34.727623  489211 cache.go:65] Caching tarball of preloaded images
	I1121 15:01:34.727705  489211 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 15:01:34.727723  489211 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 15:01:34.727831  489211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/config.json ...
	I1121 15:01:34.727849  489211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/config.json: {Name:mk3c8c84e35051431e94986c3f53c898136e093e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:01:34.748016  489211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 15:01:34.748039  489211 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 15:01:34.748053  489211 cache.go:243] Successfully downloaded all kic artifacts
	I1121 15:01:34.748076  489211 start.go:360] acquireMachinesLock for default-k8s-diff-port-124330: {Name:mk8c422fee3dc1ab576ba87a9b21326872d469a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 15:01:34.748186  489211 start.go:364] duration metric: took 88.075µs to acquireMachinesLock for "default-k8s-diff-port-124330"
	I1121 15:01:34.748215  489211 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:01:34.748290  489211 start.go:125] createHost starting for "" (driver="docker")
	I1121 15:01:34.751664  489211 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 15:01:34.751914  489211 start.go:159] libmachine.API.Create for "default-k8s-diff-port-124330" (driver="docker")
	I1121 15:01:34.751967  489211 client.go:173] LocalClient.Create starting
	I1121 15:01:34.752065  489211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem
	I1121 15:01:34.752107  489211 main.go:143] libmachine: Decoding PEM data...
	I1121 15:01:34.752128  489211 main.go:143] libmachine: Parsing certificate...
	I1121 15:01:34.752185  489211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem
	I1121 15:01:34.752208  489211 main.go:143] libmachine: Decoding PEM data...
	I1121 15:01:34.752218  489211 main.go:143] libmachine: Parsing certificate...
	I1121 15:01:34.752671  489211 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-124330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 15:01:34.769414  489211 cli_runner.go:211] docker network inspect default-k8s-diff-port-124330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 15:01:34.769507  489211 network_create.go:284] running [docker network inspect default-k8s-diff-port-124330] to gather additional debugging logs...
	I1121 15:01:34.769531  489211 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-124330
	W1121 15:01:34.785351  489211 cli_runner.go:211] docker network inspect default-k8s-diff-port-124330 returned with exit code 1
	I1121 15:01:34.785386  489211 network_create.go:287] error running [docker network inspect default-k8s-diff-port-124330]: docker network inspect default-k8s-diff-port-124330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-124330 not found
	I1121 15:01:34.785401  489211 network_create.go:289] output of [docker network inspect default-k8s-diff-port-124330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-124330 not found
	
	** /stderr **
	I1121 15:01:34.785514  489211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 15:01:34.804072  489211 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-82d3b8bc8a36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:46:f3:82:e8:95} reservation:<nil>}
	I1121 15:01:34.804511  489211 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-741c868a6917 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:04:b7:a7:98:dc} reservation:<nil>}
	I1121 15:01:34.804765  489211 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-047a1ecabae6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:eb:03:dd:6a:cd} reservation:<nil>}
	I1121 15:01:34.805062  489211 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-353a1d7977a8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:d6:61:83:05:3c} reservation:<nil>}
	I1121 15:01:34.805588  489211 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a5c790}
	I1121 15:01:34.805611  489211 network_create.go:124] attempt to create docker network default-k8s-diff-port-124330 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 15:01:34.805677  489211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-124330 default-k8s-diff-port-124330
	I1121 15:01:34.871978  489211 network_create.go:108] docker network default-k8s-diff-port-124330 192.168.85.0/24 created
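
Subnet selection walked the standard KIC candidates (192.168.49.0/24 through 192.168.76.0/24 were already taken by sibling profiles) before settling on 192.168.85.0/24. The creation step is exactly the docker command logged above, reformatted here for readability:

    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-124330 \
      default-k8s-diff-port-124330
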
	I1121 15:01:34.872015  489211 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-124330" container
	I1121 15:01:34.872109  489211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 15:01:34.890059  489211 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-124330 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-124330 --label created_by.minikube.sigs.k8s.io=true
	I1121 15:01:34.908010  489211 oci.go:103] Successfully created a docker volume default-k8s-diff-port-124330
	I1121 15:01:34.908118  489211 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-124330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-124330 --entrypoint /usr/bin/test -v default-k8s-diff-port-124330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 15:01:35.489259  489211 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-124330
	I1121 15:01:35.489337  489211 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:01:35.489353  489211 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 15:01:35.489420  489211 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-124330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 15:01:39.935235  489211 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-124330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.445745839s)
	I1121 15:01:39.935269  489211 kic.go:203] duration metric: took 4.445912577s to extract preloaded images to volume ...
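
The ~4.4s extraction mounts the lz4 preload tarball read-only into a throwaway kicbase container and untars it onto the machine volume, so the node starts with all v1.34.1 cri-o images already present. The logged command, reformatted:

    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro \
      -v default-k8s-diff-port-124330:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a \
      -I lz4 -xf /preloaded.tar -C /extractDir
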
	W1121 15:01:39.935419  489211 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 15:01:39.935546  489211 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 15:01:39.993793  489211 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-124330 --name default-k8s-diff-port-124330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-124330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-124330 --network default-k8s-diff-port-124330 --ip 192.168.85.2 --volume default-k8s-diff-port-124330:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
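
The node itself is a single privileged kicbase container: tmpfs on /tmp and /run for systemd, the machine volume on /var, a static IP on the profile network, and the API server port (8444) plus the SSH/Docker/registry ports published only on 127.0.0.1. Key flags, condensed from the full command above (hostname, --name, and the minikube labels are omitted here; `<kicbase image>` stands for the full image reference in the log):

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --network default-k8s-diff-port-124330 --ip 192.168.85.2 \
      --volume default-k8s-diff-port-124330:/var \
      --memory=3072mb --cpus=2 -e container=docker \
      --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 \
      --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      <kicbase image>
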
	I1121 15:01:40.343252  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Running}}
	I1121 15:01:40.364006  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:01:40.397536  489211 cli_runner.go:164] Run: docker exec default-k8s-diff-port-124330 stat /var/lib/dpkg/alternatives/iptables
	I1121 15:01:40.444911  489211 oci.go:144] the created container "default-k8s-diff-port-124330" has a running status.
	I1121 15:01:40.444938  489211 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa...
	I1121 15:01:40.926525  489211 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 15:01:40.950744  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:01:40.991069  489211 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 15:01:40.991088  489211 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-124330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 15:01:41.052958  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:01:41.086212  489211 machine.go:94] provisionDockerMachine start ...
	I1121 15:01:41.086316  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:41.111020  489211 main.go:143] libmachine: Using SSH client type: native
	I1121 15:01:41.111356  489211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1121 15:01:41.111369  489211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 15:01:41.112146  489211 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 15:01:44.264038  489211 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124330
	
	I1121 15:01:44.264061  489211 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-124330"
	I1121 15:01:44.264138  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:44.282849  489211 main.go:143] libmachine: Using SSH client type: native
	I1121 15:01:44.283311  489211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1121 15:01:44.283330  489211 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-124330 && echo "default-k8s-diff-port-124330" | sudo tee /etc/hostname
	I1121 15:01:44.451922  489211 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124330
	
	I1121 15:01:44.452008  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:44.477076  489211 main.go:143] libmachine: Using SSH client type: native
	I1121 15:01:44.477397  489211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1121 15:01:44.477416  489211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-124330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-124330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-124330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 15:01:44.640773  489211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 15:01:44.640850  489211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 15:01:44.640885  489211 ubuntu.go:190] setting up certificates
	I1121 15:01:44.640924  489211 provision.go:84] configureAuth start
	I1121 15:01:44.641018  489211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124330
	I1121 15:01:44.665478  489211 provision.go:143] copyHostCerts
	I1121 15:01:44.665544  489211 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 15:01:44.665554  489211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 15:01:44.665643  489211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 15:01:44.665727  489211 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 15:01:44.665732  489211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 15:01:44.665757  489211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 15:01:44.665803  489211 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 15:01:44.665808  489211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 15:01:44.665830  489211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 15:01:44.665875  489211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-124330 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-124330 localhost minikube]
	I1121 15:01:44.951856  489211 provision.go:177] copyRemoteCerts
	I1121 15:01:44.951935  489211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 15:01:44.951975  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:44.974089  489211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:01:45.128034  489211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 15:01:45.239057  489211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1121 15:01:45.316047  489211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 15:01:45.355946  489211 provision.go:87] duration metric: took 714.976121ms to configureAuth
	I1121 15:01:45.356042  489211 ubuntu.go:206] setting minikube options for container-runtime
	I1121 15:01:45.356292  489211 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:01:45.356561  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:45.385157  489211 main.go:143] libmachine: Using SSH client type: native
	I1121 15:01:45.385478  489211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1121 15:01:45.385493  489211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 15:01:45.838100  489211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 15:01:45.838121  489211 machine.go:97] duration metric: took 4.751887752s to provisionDockerMachine
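
The final provisioning step above writes a one-line environment file for CRI-O over SSH and restarts the service. The file it leaves behind should be exactly the tee'd content (the trailing space inside the quotes is in the original too):

    # /etc/sysconfig/crio.minikube, as written by the SSH command above
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

Marking the whole service CIDR 10.96.0.0/12 as an insecure registry range lets pulls from in-cluster registries on the service network proceed without TLS.
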
	I1121 15:01:45.838130  489211 client.go:176] duration metric: took 11.086151437s to LocalClient.Create
	I1121 15:01:45.838144  489211 start.go:167] duration metric: took 11.086231455s to libmachine.API.Create "default-k8s-diff-port-124330"
	I1121 15:01:45.838151  489211 start.go:293] postStartSetup for "default-k8s-diff-port-124330" (driver="docker")
	I1121 15:01:45.838161  489211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 15:01:45.838226  489211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 15:01:45.838268  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:45.869031  489211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:01:45.979278  489211 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 15:01:45.983262  489211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 15:01:45.983290  489211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 15:01:45.983301  489211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 15:01:45.983353  489211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 15:01:45.983436  489211 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 15:01:45.983537  489211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 15:01:45.997276  489211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:01:46.026362  489211 start.go:296] duration metric: took 188.194744ms for postStartSetup
	I1121 15:01:46.026823  489211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124330
	I1121 15:01:46.047634  489211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/config.json ...
	I1121 15:01:46.050270  489211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 15:01:46.050329  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:46.072973  489211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:01:46.186092  489211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 15:01:46.191917  489211 start.go:128] duration metric: took 11.443603234s to createHost
	I1121 15:01:46.191943  489211 start.go:83] releasing machines lock for "default-k8s-diff-port-124330", held for 11.44374438s
	I1121 15:01:46.192030  489211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124330
	I1121 15:01:46.215745  489211 ssh_runner.go:195] Run: cat /version.json
	I1121 15:01:46.215815  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:46.216075  489211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 15:01:46.216129  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:01:46.244947  489211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:01:46.248522  489211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:01:46.466467  489211 ssh_runner.go:195] Run: systemctl --version
	I1121 15:01:46.475057  489211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 15:01:46.528105  489211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 15:01:46.534722  489211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 15:01:46.534854  489211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 15:01:46.569717  489211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 15:01:46.569742  489211 start.go:496] detecting cgroup driver to use...
	I1121 15:01:46.569803  489211 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 15:01:46.569876  489211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 15:01:46.591441  489211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 15:01:46.606767  489211 docker.go:218] disabling cri-docker service (if available) ...
	I1121 15:01:46.606907  489211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 15:01:46.629418  489211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 15:01:46.651987  489211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 15:01:46.795400  489211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 15:01:46.951974  489211 docker.go:234] disabling docker service ...
	I1121 15:01:46.952047  489211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 15:01:46.987430  489211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 15:01:47.003828  489211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 15:01:47.179339  489211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 15:01:47.337780  489211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 15:01:47.350974  489211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 15:01:47.368905  489211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 15:01:47.368985  489211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:01:47.380196  489211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 15:01:47.380284  489211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:01:47.394420  489211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:01:47.410282  489211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:01:47.428639  489211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 15:01:47.451217  489211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:01:47.463712  489211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:01:47.487555  489211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
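
The sed sequence above edits CRI-O's drop-in config in place: set the pause image, force the cgroupfs cgroup manager (matching the "cgroupfs" driver detected on the host), pin conmon to the pod cgroup, and open unprivileged low ports. The net effect on /etc/crio/crio.conf.d/02-crio.conf is roughly the following TOML; the section headers are an assumption (the seds rewrite only the keys, and these keys live under [crio.image] and [crio.runtime] in a stock config):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
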
	I1121 15:01:47.498017  489211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 15:01:47.507340  489211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 15:01:47.516129  489211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:01:47.707790  489211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 15:01:47.915658  489211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 15:01:47.915733  489211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 15:01:47.920726  489211 start.go:564] Will wait 60s for crictl version
	I1121 15:01:47.920796  489211 ssh_runner.go:195] Run: which crictl
	I1121 15:01:47.924737  489211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 15:01:47.970453  489211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 15:01:47.970561  489211 ssh_runner.go:195] Run: crio --version
	I1121 15:01:48.012032  489211 ssh_runner.go:195] Run: crio --version
	I1121 15:01:48.055893  489211 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	
	
	==> CRI-O <==
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.429288233Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e052929b-1725-4644-b8da-1cb5b4941988 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.430416411Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d5060d00-516d-486e-80f7-95b49df60ef9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.430512724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.43804297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.438228671Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7412d9bb2ab96b1780ac05537baf9131ebdbe033970b86ab9562ab38072da76b/merged/etc/passwd: no such file or directory"
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.438258907Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7412d9bb2ab96b1780ac05537baf9131ebdbe033970b86ab9562ab38072da76b/merged/etc/group: no such file or directory"
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.438517142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.454884185Z" level=info msg="Created container dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466: kube-system/storage-provisioner/storage-provisioner" id=d5060d00-516d-486e-80f7-95b49df60ef9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.455947804Z" level=info msg="Starting container: dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466" id=fa28ff76-d522-4161-a89e-822a43f7f8d1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:01:20 embed-certs-902161 crio[653]: time="2025-11-21T15:01:20.457466057Z" level=info msg="Started container" PID=1651 containerID=dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466 description=kube-system/storage-provisioner/storage-provisioner id=fa28ff76-d522-4161-a89e-822a43f7f8d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8340300a303b53ca127c01906bc5bb6ebcc80e54c442e7cf31d1ae0f73d034c
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.022319452Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.027205819Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.027384873Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.027479808Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.036290299Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.036327961Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.0382362Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.046237328Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.04630722Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.046359077Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.051233005Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.051269625Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.051289129Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.056559394Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:01:29 embed-certs-902161 crio[653]: time="2025-11-21T15:01:29.056744717Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	dbcc098f64f64       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   e8340300a303b       storage-provisioner                          kube-system
	1b46439195479       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   79dde5e9b8a4f       dashboard-metrics-scraper-6ffb444bf9-ztccv   kubernetes-dashboard
	e0241230681c8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   d0f47c6d3178c       kubernetes-dashboard-855c9754f9-rlwns        kubernetes-dashboard
	0058c9b1eab6f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   309acb12594df       busybox                                      default
	20114e40488fe       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   ad69edfff0d93       coredns-66bc5c9577-gttll                     kube-system
	71685dc5dca64       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   e8340300a303b       storage-provisioner                          kube-system
	fb6a440f09965       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   f314ea602a96d       kube-proxy-wkbb9                             kube-system
	f3a6175482209       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   7ff4b9250b21c       kindnet-9zs98                                kube-system
	293c832724412       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   40d7f235bbe71       kube-apiserver-embed-certs-902161            kube-system
	7f311233a0597       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   2ae02f79a8e04       kube-controller-manager-embed-certs-902161   kube-system
	f8022ed115d2d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0a292a6ecfb3e       kube-scheduler-embed-certs-902161            kube-system
	0040362a6ed65       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   dd4bd76222908       etcd-embed-certs-902161                      kube-system
	
	
	==> coredns [20114e40488feaa7304dc637c72903ffa7244761bc47c1d43f62ba4230d0cac2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56919 - 38638 "HINFO IN 1406632567858009987.1671126665361810321. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005848388s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-902161
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-902161
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=embed-certs-902161
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_59_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:59:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-902161
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:01:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:01:27 +0000   Fri, 21 Nov 2025 14:59:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:01:27 +0000   Fri, 21 Nov 2025 14:59:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:01:27 +0000   Fri, 21 Nov 2025 14:59:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 15:01:27 +0000   Fri, 21 Nov 2025 15:00:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-902161
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                889cdbbe-ffd5-4f2f-86b7-0117a83246d8
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-gttll                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m29s
	  kube-system                 etcd-embed-certs-902161                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m35s
	  kube-system                 kindnet-9zs98                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m30s
	  kube-system                 kube-apiserver-embed-certs-902161             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-embed-certs-902161    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-wkbb9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-embed-certs-902161             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ztccv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rlwns         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m28s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Warning  CgroupV1                 2m47s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node embed-certs-902161 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node embed-certs-902161 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x8 over 2m46s)  kubelet          Node embed-certs-902161 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m35s                  kubelet          Node embed-certs-902161 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m35s                  kubelet          Node embed-certs-902161 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s                  kubelet          Node embed-certs-902161 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m31s                  node-controller  Node embed-certs-902161 event: Registered Node embed-certs-902161 in Controller
	  Normal   NodeReady                108s                   kubelet          Node embed-certs-902161 status is now: NodeReady
	  Normal   Starting                 71s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node embed-certs-902161 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node embed-certs-902161 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node embed-certs-902161 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                    node-controller  Node embed-certs-902161 event: Registered Node embed-certs-902161 in Controller
	
	
	==> dmesg <==
	[Nov21 14:36] overlayfs: idmapped layers are currently not supported
	[Nov21 14:37] overlayfs: idmapped layers are currently not supported
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	[Nov21 15:00] overlayfs: idmapped layers are currently not supported
	[ +13.392744] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0040362a6ed65610771d229f1c844dc6fd8551a599ac1712dfac5b502944fa4e] <==
	{"level":"warn","ts":"2025-11-21T15:00:44.482969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.515660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.566259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.596650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.624568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.663654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.725017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.740892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.757136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.813438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.830919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.851210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.881104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.921521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.950615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:44.978905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.014117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.036108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.058555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.088484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.119910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.172719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.208231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.256650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:00:45.466356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41248","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:01:49 up  2:44,  0 user,  load average: 3.02, 3.19, 2.69
	Linux embed-certs-902161 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3a6175482209ccf02f79087fa28bee4117be57f12c7cf3f8d5ec9e1a96bc72a] <==
	I1121 15:00:48.743906       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 15:00:48.770853       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 15:00:48.771004       1 main.go:148] setting mtu 1500 for CNI 
	I1121 15:00:48.771018       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 15:00:48.771032       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T15:00:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 15:00:49.021875       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 15:00:49.021915       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 15:00:49.021925       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 15:00:49.022312       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 15:01:19.022137       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 15:01:19.022140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 15:01:19.022258       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 15:01:19.022269       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1121 15:01:20.222550       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 15:01:20.222590       1 metrics.go:72] Registering metrics
	I1121 15:01:20.222645       1 controller.go:711] "Syncing nftables rules"
	I1121 15:01:29.021994       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 15:01:29.022056       1 main.go:301] handling current node
	I1121 15:01:39.026783       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 15:01:39.026821       1 main.go:301] handling current node
	I1121 15:01:49.036430       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 15:01:49.036465       1 main.go:301] handling current node
	
	
	==> kube-apiserver [293c832724412d175c6e8ec646f8f5a753d6137d6354da90fcdc7748544c0176] <==
	I1121 15:00:47.340106       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 15:00:47.340112       1 cache.go:39] Caches are synced for autoregister controller
	I1121 15:00:47.351107       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1121 15:00:47.351356       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 15:00:47.351396       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 15:00:47.369946       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:00:47.370331       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1121 15:00:47.370904       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 15:00:47.382173       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 15:00:47.382207       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 15:00:47.394933       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 15:00:47.381899       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 15:00:47.400565       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1121 15:00:47.529808       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 15:00:47.765007       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 15:00:47.984587       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 15:00:49.294452       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 15:00:49.630275       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 15:00:49.798437       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 15:00:49.897439       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 15:00:50.154119       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.76.53"}
	I1121 15:00:50.182857       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.3.94"}
	I1121 15:00:50.892315       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 15:00:51.084709       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 15:00:51.130310       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7f311233a0597f06fd619eca3d2076efd29a59099af3f91b2e7ad174953bec43] <==
	I1121 15:00:50.857612       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 15:00:50.857645       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 15:00:50.864584       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 15:00:50.864976       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:00:50.865252       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:00:50.871596       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 15:00:50.871952       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 15:00:50.873346       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 15:00:50.873572       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 15:00:50.879114       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 15:00:50.879643       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 15:00:50.882191       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 15:00:50.882433       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 15:00:50.884649       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 15:00:50.895276       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 15:00:50.895455       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 15:00:50.896252       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-902161"
	I1121 15:00:50.896345       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 15:00:50.895517       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 15:00:50.904009       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 15:00:50.905530       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 15:00:50.908403       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:00:50.922513       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 15:00:50.928358       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 15:00:50.932107       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [fb6a440f0996534be16e2f3149870c14ebb2b9d6295a9f4a022fc7d662c2cb56] <==
	I1121 15:00:50.258698       1 server_linux.go:53] "Using iptables proxy"
	I1121 15:00:50.439881       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 15:00:50.540204       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 15:00:50.540310       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 15:00:50.540450       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 15:00:50.722732       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 15:00:50.722858       1 server_linux.go:132] "Using iptables Proxier"
	I1121 15:00:50.851275       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 15:00:50.851772       1 server.go:527] "Version info" version="v1.34.1"
	I1121 15:00:50.851828       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:00:50.914836       1 config.go:200] "Starting service config controller"
	I1121 15:00:50.934095       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 15:00:50.919194       1 config.go:106] "Starting endpoint slice config controller"
	I1121 15:00:50.941939       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 15:00:50.946151       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 15:00:50.933868       1 config.go:309] "Starting node config controller"
	I1121 15:00:50.946376       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 15:00:50.946408       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 15:00:50.919236       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 15:00:50.946489       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 15:00:51.044809       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 15:00:51.047076       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f8022ed115d2dc50106d1d8099fe151f9220a20d78fad121bb27fe4d5d278763] <==
	I1121 15:00:43.637551       1 serving.go:386] Generated self-signed cert in-memory
	W1121 15:00:46.911679       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 15:00:46.911708       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 15:00:46.911718       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 15:00:46.911736       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 15:00:47.299332       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 15:00:47.299358       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:00:47.301810       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 15:00:47.301911       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:00:47.301929       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:00:47.301949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 15:00:47.408668       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 15:00:51 embed-certs-902161 kubelet[785]: I1121 15:00:51.286211     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9e943b24-8b8a-4b65-9a6f-5b327676335b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ztccv\" (UID: \"9e943b24-8b8a-4b65-9a6f-5b327676335b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv"
	Nov 21 15:00:51 embed-certs-902161 kubelet[785]: I1121 15:00:51.386827     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/81841e6d-5253-428b-8f5f-98af5f095bfc-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rlwns\" (UID: \"81841e6d-5253-428b-8f5f-98af5f095bfc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlwns"
	Nov 21 15:00:51 embed-certs-902161 kubelet[785]: I1121 15:00:51.386896     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbsn\" (UniqueName: \"kubernetes.io/projected/81841e6d-5253-428b-8f5f-98af5f095bfc-kube-api-access-qfbsn\") pod \"kubernetes-dashboard-855c9754f9-rlwns\" (UID: \"81841e6d-5253-428b-8f5f-98af5f095bfc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlwns"
	Nov 21 15:00:51 embed-certs-902161 kubelet[785]: W1121 15:00:51.613841     785 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/crio-79dde5e9b8a4fc00b45699273b1acc7adaf1e0ea5e0752317a3d254b370d1b14 WatchSource:0}: Error finding container 79dde5e9b8a4fc00b45699273b1acc7adaf1e0ea5e0752317a3d254b370d1b14: Status 404 returned error can't find the container with id 79dde5e9b8a4fc00b45699273b1acc7adaf1e0ea5e0752317a3d254b370d1b14
	Nov 21 15:00:51 embed-certs-902161 kubelet[785]: W1121 15:00:51.647574     785 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/38e73448071a48b03d566b59118d066f1f1cf6679f7d57d3a6148100f89c7a46/crio-d0f47c6d3178c545f8c691f7a9afbb93bafa59d62a812e2d8e162edc43551d0f WatchSource:0}: Error finding container d0f47c6d3178c545f8c691f7a9afbb93bafa59d62a812e2d8e162edc43551d0f: Status 404 returned error can't find the container with id d0f47c6d3178c545f8c691f7a9afbb93bafa59d62a812e2d8e162edc43551d0f
	Nov 21 15:00:57 embed-certs-902161 kubelet[785]: I1121 15:00:57.301804     785 scope.go:117] "RemoveContainer" containerID="8441a3fd1cbfe4703e47a5a2fbbfce80e821f3b96cb65c3678c2a05dacf173e0"
	Nov 21 15:00:57 embed-certs-902161 kubelet[785]: I1121 15:00:57.990471     785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 15:00:58 embed-certs-902161 kubelet[785]: I1121 15:00:58.306967     785 scope.go:117] "RemoveContainer" containerID="8441a3fd1cbfe4703e47a5a2fbbfce80e821f3b96cb65c3678c2a05dacf173e0"
	Nov 21 15:00:58 embed-certs-902161 kubelet[785]: I1121 15:00:58.307483     785 scope.go:117] "RemoveContainer" containerID="550fa2b54ad7db592179e3a0b07717ab50fb40e6156fd27ff534f6563463cdee"
	Nov 21 15:00:58 embed-certs-902161 kubelet[785]: E1121 15:00:58.307646     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ztccv_kubernetes-dashboard(9e943b24-8b8a-4b65-9a6f-5b327676335b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv" podUID="9e943b24-8b8a-4b65-9a6f-5b327676335b"
	Nov 21 15:01:01 embed-certs-902161 kubelet[785]: I1121 15:01:01.577770     785 scope.go:117] "RemoveContainer" containerID="550fa2b54ad7db592179e3a0b07717ab50fb40e6156fd27ff534f6563463cdee"
	Nov 21 15:01:01 embed-certs-902161 kubelet[785]: E1121 15:01:01.577967     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ztccv_kubernetes-dashboard(9e943b24-8b8a-4b65-9a6f-5b327676335b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv" podUID="9e943b24-8b8a-4b65-9a6f-5b327676335b"
	Nov 21 15:01:16 embed-certs-902161 kubelet[785]: I1121 15:01:16.051336     785 scope.go:117] "RemoveContainer" containerID="550fa2b54ad7db592179e3a0b07717ab50fb40e6156fd27ff534f6563463cdee"
	Nov 21 15:01:16 embed-certs-902161 kubelet[785]: I1121 15:01:16.413531     785 scope.go:117] "RemoveContainer" containerID="550fa2b54ad7db592179e3a0b07717ab50fb40e6156fd27ff534f6563463cdee"
	Nov 21 15:01:16 embed-certs-902161 kubelet[785]: I1121 15:01:16.413820     785 scope.go:117] "RemoveContainer" containerID="1b46439195479c002a2b7a0455d409a8d6e3b2b1b3864f183f8851bfa47b9f16"
	Nov 21 15:01:16 embed-certs-902161 kubelet[785]: E1121 15:01:16.413980     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ztccv_kubernetes-dashboard(9e943b24-8b8a-4b65-9a6f-5b327676335b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv" podUID="9e943b24-8b8a-4b65-9a6f-5b327676335b"
	Nov 21 15:01:16 embed-certs-902161 kubelet[785]: I1121 15:01:16.436602     785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlwns" podStartSLOduration=15.48148553 podStartE2EDuration="25.436584926s" podCreationTimestamp="2025-11-21 15:00:51 +0000 UTC" firstStartedPulling="2025-11-21 15:00:51.651612009 +0000 UTC m=+13.019206277" lastFinishedPulling="2025-11-21 15:01:01.606711405 +0000 UTC m=+22.974305673" observedRunningTime="2025-11-21 15:01:02.404417147 +0000 UTC m=+23.772011522" watchObservedRunningTime="2025-11-21 15:01:16.436584926 +0000 UTC m=+37.804179194"
	Nov 21 15:01:20 embed-certs-902161 kubelet[785]: I1121 15:01:20.426799     785 scope.go:117] "RemoveContainer" containerID="71685dc5dca648adc7e487e76ac64efb2d3c2f7323ace890db6b8e8cce320c72"
	Nov 21 15:01:21 embed-certs-902161 kubelet[785]: I1121 15:01:21.577795     785 scope.go:117] "RemoveContainer" containerID="1b46439195479c002a2b7a0455d409a8d6e3b2b1b3864f183f8851bfa47b9f16"
	Nov 21 15:01:21 embed-certs-902161 kubelet[785]: E1121 15:01:21.578015     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ztccv_kubernetes-dashboard(9e943b24-8b8a-4b65-9a6f-5b327676335b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv" podUID="9e943b24-8b8a-4b65-9a6f-5b327676335b"
	Nov 21 15:01:33 embed-certs-902161 kubelet[785]: I1121 15:01:33.050987     785 scope.go:117] "RemoveContainer" containerID="1b46439195479c002a2b7a0455d409a8d6e3b2b1b3864f183f8851bfa47b9f16"
	Nov 21 15:01:33 embed-certs-902161 kubelet[785]: E1121 15:01:33.051651     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ztccv_kubernetes-dashboard(9e943b24-8b8a-4b65-9a6f-5b327676335b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ztccv" podUID="9e943b24-8b8a-4b65-9a6f-5b327676335b"
	Nov 21 15:01:43 embed-certs-902161 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 15:01:43 embed-certs-902161 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 15:01:43 embed-certs-902161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e0241230681c85d418bfb39b23677a3797d0bbaea46d71be8cd3986fe0435074] <==
	2025/11/21 15:01:01 Using namespace: kubernetes-dashboard
	2025/11/21 15:01:01 Using in-cluster config to connect to apiserver
	2025/11/21 15:01:01 Using secret token for csrf signing
	2025/11/21 15:01:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 15:01:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 15:01:01 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 15:01:01 Generating JWE encryption key
	2025/11/21 15:01:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 15:01:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 15:01:02 Initializing JWE encryption key from synchronized object
	2025/11/21 15:01:02 Creating in-cluster Sidecar client
	2025/11/21 15:01:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 15:01:02 Serving insecurely on HTTP port: 9090
	2025/11/21 15:01:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 15:01:01 Starting overwatch
	
	
	==> storage-provisioner [71685dc5dca648adc7e487e76ac64efb2d3c2f7323ace890db6b8e8cce320c72] <==
	I1121 15:00:49.887297       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 15:01:19.946186       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [dbcc098f64f64a45ff8ebb087821fbc5ba58ac0688bfe6746b560d5f89603466] <==
	I1121 15:01:20.484511       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 15:01:20.484635       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 15:01:20.488299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:23.944228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:28.205448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:31.803579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:34.856780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:37.879455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:37.887907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:01:37.888095       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 15:01:37.888329       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-902161_75135da3-7e02-4dec-9f01-45123bd887ff!
	I1121 15:01:37.889620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ebbc0454-23a8-4831-a020-9201f95f5437", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-902161_75135da3-7e02-4dec-9f01-45123bd887ff became leader
	W1121 15:01:37.901948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:37.918001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:01:37.988990       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-902161_75135da3-7e02-4dec-9f01-45123bd887ff!
	W1121 15:01:39.921442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:39.926832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:41.930734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:41.938018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:43.941498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:43.946128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:45.949439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:45.955177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:47.959180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:01:47.978031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
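Note: in the logs above, kindnet's reflectors and the first storage-provisioner both time out for ~30s dialing the in-cluster apiserver Service (10.96.0.1:443) while the control plane restarts, caches resync at 15:01:20, and systemd then stops kubelet at 15:01:43 as part of the pause. A minimal reachability probe for this situation, assuming the embed-certs-902161 profile still exists and that curl is present in the kicbase image (both are assumptions, not part of the test):

	# From inside the node's network namespace, hit the Service VIP directly:
	out/minikube-linux-arm64 -p embed-certs-902161 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/healthz; echo
	# Compare with the host-side view of the same apiserver:
	out/minikube-linux-arm64 -p embed-certs-902161 kubectl -- get --raw /readyz

If the VIP answers from the node while /readyz is healthy from the host, the earlier timeouts were a transient of the apiserver restart rather than a broken Service route.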
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-902161 -n embed-certs-902161
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-902161 -n embed-certs-902161: exit status 2 (484.829515ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-902161 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.76s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-714993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-714993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (266.600108ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:02:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-714993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
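Note: MK_ADDON_ENABLE_PAUSED above comes from minikube's paused-container check, which shells into the node and runs `sudo runc list -f json`; here runc exits non-zero because its default state root, /run/runc, does not exist. The failing check can be rerun by hand (a diagnostic sketch using the profile name from this test; the second command is the exact invocation quoted in the error):

	out/minikube-linux-arm64 -p newest-cni-714993 ssh -- sudo ls -ld /run/runc
	out/minikube-linux-arm64 -p newest-cni-714993 ssh -- sudo runc list -f json

If /run/runc is absent while containers are clearly running, crio may be keeping runc state under a different runtime_root, so a missing directory does not by itself prove the node is broken.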
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-714993
helpers_test.go:243: (dbg) docker inspect newest-cni-714993:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a",
	        "Created": "2025-11-21T15:02:00.230610086Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T15:02:00.385068662Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/hostname",
	        "HostsPath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/hosts",
	        "LogPath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a-json.log",
	        "Name": "/newest-cni-714993",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-714993:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-714993",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a",
	                "LowerDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-714993",
	                "Source": "/var/lib/docker/volumes/newest-cni-714993/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-714993",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-714993",
	                "name.minikube.sigs.k8s.io": "newest-cni-714993",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29943ca0cdd32220133008f8c381434686aa43c71561a2fc56f576bbd91aac82",
	            "SandboxKey": "/var/run/docker/netns/29943ca0cdd3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-714993": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:8a:3c:f1:b1:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1cfea5893685cdf8198d74a4f99484841fa068338f22db34f688b7b58b6435e9",
	                    "EndpointID": "83bdb0057666bcddfcb6f5e2210852c49fab00e753e56caa5ae308d9c0fe0967",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-714993",
	                        "bc5829e976c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
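Note: the inspect dump above is easier to query with a Go template than by eye; for example, the host port mapped to the API server (8443/tcp) can be pulled out directly (a convenience sketch, not part of the test run):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' newest-cni-714993
	# expected output given the Ports section above: 33456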
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-714993 -n newest-cni-714993
E1121 15:02:38.753488  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-714993 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-714993 logs -n 25: (1.157280873s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p cert-expiration-304879                                                                                                                                                                                                                     │ cert-expiration-304879       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ delete  │ -p old-k8s-version-357479                                                                                                                                                                                                                     │ old-k8s-version-357479       │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 15:00 UTC │
	│ delete  │ -p disable-driver-mounts-984933                                                                                                                                                                                                               │ disable-driver-mounts-984933 │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:58 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ stop    │ -p no-preload-844780 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ addons  │ enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-844780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ stop    │ -p embed-certs-902161 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-902161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ image   │ no-preload-844780 image list --format=json                                                                                                                                                                                                    │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p no-preload-844780 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ image   │ embed-certs-902161 image list --format=json                                                                                                                                                                                                   │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p embed-certs-902161 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-714993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 15:01:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 15:01:53.806027  492417 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:01:53.806201  492417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:01:53.806207  492417 out.go:374] Setting ErrFile to fd 2...
	I1121 15:01:53.806211  492417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:01:53.806472  492417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:01:53.806892  492417 out.go:368] Setting JSON to false
	I1121 15:01:53.807835  492417 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9866,"bootTime":1763727448,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 15:01:53.807894  492417 start.go:143] virtualization:  
	I1121 15:01:53.811760  492417 out.go:179] * [newest-cni-714993] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 15:01:53.816097  492417 notify.go:221] Checking for updates...
	I1121 15:01:53.816984  492417 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 15:01:53.821278  492417 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 15:01:53.824424  492417 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:01:53.827515  492417 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 15:01:53.830846  492417 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 15:01:53.833755  492417 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 15:01:53.837177  492417 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:01:53.837339  492417 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 15:01:53.874088  492417 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 15:01:53.874212  492417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:01:53.963952  492417 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 15:01:53.954189872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:01:53.964066  492417 docker.go:319] overlay module found
	I1121 15:01:53.967183  492417 out.go:179] * Using the docker driver based on user configuration
	I1121 15:01:53.972179  492417 start.go:309] selected driver: docker
	I1121 15:01:53.972201  492417 start.go:930] validating driver "docker" against <nil>
	I1121 15:01:53.972214  492417 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 15:01:53.972983  492417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:01:54.083896  492417 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 15:01:54.073385046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:01:54.084061  492417 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1121 15:01:54.084087  492417 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1121 15:01:54.084317  492417 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 15:01:54.087487  492417 out.go:179] * Using Docker driver with root privileges
	I1121 15:01:54.090531  492417 cni.go:84] Creating CNI manager for ""
	I1121 15:01:54.090595  492417 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:01:54.090605  492417 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 15:01:54.090686  492417 start.go:353] cluster config:
	{Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:01:54.093733  492417 out.go:179] * Starting "newest-cni-714993" primary control-plane node in "newest-cni-714993" cluster
	I1121 15:01:54.096673  492417 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 15:01:54.099528  492417 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 15:01:54.102421  492417 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:01:54.102470  492417 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 15:01:54.102480  492417 cache.go:65] Caching tarball of preloaded images
	I1121 15:01:54.102582  492417 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 15:01:54.102592  492417 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 15:01:54.102734  492417 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/config.json ...
	I1121 15:01:54.102752  492417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/config.json: {Name:mk775563d14cf4fafc7deeeeaabbe6b868d0a901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:01:54.102900  492417 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 15:01:54.129811  492417 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 15:01:54.129832  492417 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 15:01:54.129845  492417 cache.go:243] Successfully downloaded all kic artifacts
	I1121 15:01:54.129867  492417 start.go:360] acquireMachinesLock for newest-cni-714993: {Name:mk4fe5ba68b949796f6324fdcc6a0615ddd88762 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 15:01:54.129962  492417 start.go:364] duration metric: took 80.173µs to acquireMachinesLock for "newest-cni-714993"
	I1121 15:01:54.129986  492417 start.go:93] Provisioning new machine with config: &{Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:01:54.130063  492417 start.go:125] createHost starting for "" (driver="docker")
	I1121 15:01:51.078470  489211 out.go:252]   - Generating certificates and keys ...
	I1121 15:01:51.078646  489211 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 15:01:51.078748  489211 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 15:01:52.127292  489211 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 15:01:53.355724  489211 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 15:01:53.737228  489211 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 15:01:54.166247  489211 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
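	The [certs] lines above come from kubeadm's certificate phase: it reuses the CA already staged on disk and mints the component certificates from it. A minimal sketch for inspecting that CA on the node, assuming the /var/lib/minikube/certs layout that appears later in this log (and a kubeadm config path under /var/tmp/minikube, also assumed from the scp step below):
	    # Print the CA subject and validity window
	    sudo openssl x509 -in /var/lib/minikube/certs/ca.crt -noout -subject -dates
	    # The same phase can be re-run standalone if needed
	    sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml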
	I1121 15:01:54.133603  492417 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 15:01:54.133851  492417 start.go:159] libmachine.API.Create for "newest-cni-714993" (driver="docker")
	I1121 15:01:54.133882  492417 client.go:173] LocalClient.Create starting
	I1121 15:01:54.133949  492417 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem
	I1121 15:01:54.133980  492417 main.go:143] libmachine: Decoding PEM data...
	I1121 15:01:54.134002  492417 main.go:143] libmachine: Parsing certificate...
	I1121 15:01:54.134056  492417 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem
	I1121 15:01:54.134082  492417 main.go:143] libmachine: Decoding PEM data...
	I1121 15:01:54.134093  492417 main.go:143] libmachine: Parsing certificate...
	I1121 15:01:54.134456  492417 cli_runner.go:164] Run: docker network inspect newest-cni-714993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 15:01:54.153911  492417 cli_runner.go:211] docker network inspect newest-cni-714993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 15:01:54.154018  492417 network_create.go:284] running [docker network inspect newest-cni-714993] to gather additional debugging logs...
	I1121 15:01:54.154036  492417 cli_runner.go:164] Run: docker network inspect newest-cni-714993
	W1121 15:01:54.173964  492417 cli_runner.go:211] docker network inspect newest-cni-714993 returned with exit code 1
	I1121 15:01:54.174010  492417 network_create.go:287] error running [docker network inspect newest-cni-714993]: docker network inspect newest-cni-714993: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-714993 not found
	I1121 15:01:54.174023  492417 network_create.go:289] output of [docker network inspect newest-cni-714993]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-714993 not found
	
	** /stderr **
	I1121 15:01:54.174133  492417 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 15:01:54.194851  492417 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-82d3b8bc8a36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:46:f3:82:e8:95} reservation:<nil>}
	I1121 15:01:54.195248  492417 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-741c868a6917 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:04:b7:a7:98:dc} reservation:<nil>}
	I1121 15:01:54.195467  492417 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-047a1ecabae6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:eb:03:dd:6a:cd} reservation:<nil>}
	I1121 15:01:54.195871  492417 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b1be0}
	I1121 15:01:54.195894  492417 network_create.go:124] attempt to create docker network newest-cni-714993 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 15:01:54.195952  492417 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-714993 newest-cni-714993
	I1121 15:01:54.284110  492417 network_create.go:108] docker network newest-cni-714993 192.168.76.0/24 created
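	With the 192.168.76.0/24 network created after skipping the three taken subnets, the chosen subnet and gateway can be read back with the same Go-template style the log itself uses; a quick check one could run by hand:
	    docker network inspect newest-cni-714993 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # per the log above, this prints: 192.168.76.0/24 192.168.76.1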
	I1121 15:01:54.284141  492417 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-714993" container
	I1121 15:01:54.284223  492417 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 15:01:54.313968  492417 cli_runner.go:164] Run: docker volume create newest-cni-714993 --label name.minikube.sigs.k8s.io=newest-cni-714993 --label created_by.minikube.sigs.k8s.io=true
	I1121 15:01:54.333201  492417 oci.go:103] Successfully created a docker volume newest-cni-714993
	I1121 15:01:54.333291  492417 cli_runner.go:164] Run: docker run --rm --name newest-cni-714993-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-714993 --entrypoint /usr/bin/test -v newest-cni-714993:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 15:01:55.005644  492417 oci.go:107] Successfully prepared a docker volume newest-cni-714993
	I1121 15:01:55.005769  492417 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:01:55.005780  492417 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 15:01:55.005867  492417 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-714993:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
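	The two docker run invocations above are the KIC preload trick: a throwaway container mounts the lz4-compressed image tarball read-only and untars it into the named volume that later becomes the node's /var. A generic sketch of the pattern, with placeholder names (demo-data and BASE_IMAGE stand in for minikube's volume and kicbase image; the image must ship tar and lz4):
	    docker volume create demo-data
	    docker run --rm \
	      -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	      -v demo-data:/extractDir \
	      --entrypoint /usr/bin/tar \
	      "$BASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir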
	I1121 15:01:54.676888  489211 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 15:01:54.677044  489211 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-124330 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 15:01:55.492846  489211 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 15:01:55.493011  489211 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-124330 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 15:01:55.572800  489211 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 15:01:56.110096  489211 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 15:01:56.568744  489211 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 15:01:56.568836  489211 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 15:01:56.697832  489211 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 15:01:56.939740  489211 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 15:01:57.926238  489211 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 15:01:58.597268  489211 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 15:01:58.899102  489211 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 15:01:58.899807  489211 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 15:01:58.902593  489211 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 15:01:58.942146  489211 out.go:252]   - Booting up control plane ...
	I1121 15:01:58.942282  489211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 15:01:58.942392  489211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 15:01:58.942467  489211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 15:01:58.942582  489211 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 15:01:58.942684  489211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 15:01:58.942805  489211 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 15:01:58.942903  489211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 15:01:58.942967  489211 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 15:01:59.067983  489211 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 15:01:59.068126  489211 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 15:02:00.001412  492417 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-714993:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.995502061s)
	I1121 15:02:00.001444  492417 kic.go:203] duration metric: took 4.99566084s to extract preloaded images to volume ...
	W1121 15:02:00.001585  492417 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 15:02:00.001705  492417 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 15:02:00.198618  492417 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-714993 --name newest-cni-714993 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-714993 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-714993 --network newest-cni-714993 --ip 192.168.76.2 --volume newest-cni-714993:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 15:02:00.750520  492417 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Running}}
	I1121 15:02:00.776336  492417 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:00.797278  492417 cli_runner.go:164] Run: docker exec newest-cni-714993 stat /var/lib/dpkg/alternatives/iptables
	I1121 15:02:00.867949  492417 oci.go:144] the created container "newest-cni-714993" has a running status.
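	Because the container publishes its ports to ephemeral host ports (--publish=127.0.0.1::22 in the docker run above), the provisioner must look the SSH port up before dialing; the inspect template it uses a few lines below can also be run by hand:
	    docker container inspect newest-cni-714993 \
	      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	    # in this run that resolves to 33453, the port the SSH client dials below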
	I1121 15:02:00.867977  492417 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa...
	I1121 15:02:01.774779  492417 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 15:02:01.798449  492417 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:01.824546  492417 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 15:02:01.824565  492417 kic_runner.go:114] Args: [docker exec --privileged newest-cni-714993 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 15:02:01.928558  492417 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:01.976821  492417 machine.go:94] provisionDockerMachine start ...
	I1121 15:02:01.976911  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:02.012566  492417 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:02.012925  492417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1121 15:02:02.012935  492417 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 15:02:02.013967  492417 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 15:02:00.087929  489211 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.008384451s
	I1121 15:02:00.088046  489211 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 15:02:00.090913  489211 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1121 15:02:00.091036  489211 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 15:02:00.091125  489211 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
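	kubeadm's control-plane-check polls each component's health endpoint directly; the three URLs in the log can be probed by hand from the node, e.g. (-k because the serving certificates are self-signed):
	    curl -k https://192.168.85.2:8444/livez      # kube-apiserver (note the 8444 diff-port)
	    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	    curl -k https://127.0.0.1:10259/livez        # kube-scheduler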
	I1121 15:02:05.204376  492417 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-714993
	
	I1121 15:02:05.204420  492417 ubuntu.go:182] provisioning hostname "newest-cni-714993"
	I1121 15:02:05.204495  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:05.236595  492417 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:05.236931  492417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1121 15:02:05.236951  492417 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-714993 && echo "newest-cni-714993" | sudo tee /etc/hostname
	I1121 15:02:05.435251  492417 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-714993
	
	I1121 15:02:05.435342  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:05.466236  492417 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:05.466554  492417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1121 15:02:05.466580  492417 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-714993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-714993/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-714993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 15:02:05.649874  492417 main.go:143] libmachine: SSH cmd err, output: <nil>: 
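	The shell fragment above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1, so the node can resolve its new name before any DNS is up; the result can be confirmed with a one-liner:
	    grep '^127.0.1.1' /etc/hosts    # expected: 127.0.1.1 newest-cni-714993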
	I1121 15:02:05.649905  492417 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 15:02:05.649928  492417 ubuntu.go:190] setting up certificates
	I1121 15:02:05.649938  492417 provision.go:84] configureAuth start
	I1121 15:02:05.650007  492417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-714993
	I1121 15:02:05.681162  492417 provision.go:143] copyHostCerts
	I1121 15:02:05.681235  492417 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 15:02:05.681249  492417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 15:02:05.681329  492417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 15:02:05.681431  492417 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 15:02:05.681442  492417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 15:02:05.681469  492417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 15:02:05.681526  492417 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 15:02:05.681539  492417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 15:02:05.681564  492417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 15:02:05.681619  492417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.newest-cni-714993 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-714993]
	I1121 15:02:06.041813  492417 provision.go:177] copyRemoteCerts
	I1121 15:02:06.041895  492417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 15:02:06.041943  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:06.059219  492417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:06.176706  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 15:02:06.212623  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 15:02:06.246340  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 15:02:06.279088  492417 provision.go:87] duration metric: took 629.125665ms to configureAuth
	I1121 15:02:06.279160  492417 ubuntu.go:206] setting minikube options for container-runtime
	I1121 15:02:06.279388  492417 config.go:182] Loaded profile config "newest-cni-714993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:02:06.279621  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:06.314877  492417 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:06.315179  492417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1121 15:02:06.315195  492417 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 15:02:06.688447  492417 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 15:02:06.688474  492417 machine.go:97] duration metric: took 4.711633506s to provisionDockerMachine
	I1121 15:02:06.688492  492417 client.go:176] duration metric: took 12.554604451s to LocalClient.Create
	I1121 15:02:06.688506  492417 start.go:167] duration metric: took 12.554657219s to libmachine.API.Create "newest-cni-714993"
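	The provisioning step just above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; this only takes effect if the kicbase crio.service loads that file through an EnvironmentFile= directive, which is an assumption here rather than something the log shows. Two quick checks:
	    systemctl cat crio | grep -i environmentfile   # confirm the unit reads the drop-in
	    cat /etc/sysconfig/crio.minikube               # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '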
	I1121 15:02:06.688513  492417 start.go:293] postStartSetup for "newest-cni-714993" (driver="docker")
	I1121 15:02:06.688524  492417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 15:02:06.688601  492417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 15:02:06.688673  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:06.716096  492417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:06.825615  492417 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 15:02:06.829815  492417 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 15:02:06.829857  492417 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 15:02:06.829868  492417 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 15:02:06.829929  492417 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 15:02:06.830038  492417 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 15:02:06.830169  492417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 15:02:06.842481  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:02:06.866318  492417 start.go:296] duration metric: took 177.788833ms for postStartSetup
	I1121 15:02:06.866730  492417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-714993
	I1121 15:02:06.890450  492417 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/config.json ...
	I1121 15:02:06.890754  492417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 15:02:06.890808  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:06.917847  492417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:07.022526  492417 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 15:02:07.030415  492417 start.go:128] duration metric: took 12.900335824s to createHost
	I1121 15:02:07.030439  492417 start.go:83] releasing machines lock for "newest-cni-714993", held for 12.900468994s
	I1121 15:02:07.030513  492417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-714993
	I1121 15:02:07.065045  492417 ssh_runner.go:195] Run: cat /version.json
	I1121 15:02:07.065103  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:07.065682  492417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 15:02:07.065753  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:07.104555  492417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:07.112590  492417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:07.220363  492417 ssh_runner.go:195] Run: systemctl --version
	I1121 15:02:07.344085  492417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 15:02:07.422590  492417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 15:02:07.430375  492417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 15:02:07.430515  492417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 15:02:07.467546  492417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 15:02:07.467629  492417 start.go:496] detecting cgroup driver to use...
	I1121 15:02:07.467693  492417 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 15:02:07.467774  492417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 15:02:07.493180  492417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 15:02:07.515171  492417 docker.go:218] disabling cri-docker service (if available) ...
	I1121 15:02:07.515292  492417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 15:02:07.535914  492417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 15:02:07.565500  492417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 15:02:07.752863  492417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 15:02:07.966880  492417 docker.go:234] disabling docker service ...
	I1121 15:02:07.967001  492417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 15:02:08.012902  492417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 15:02:08.032076  492417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 15:02:08.215591  492417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 15:02:08.391710  492417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 15:02:08.408585  492417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 15:02:08.427194  492417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 15:02:08.427319  492417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:08.436918  492417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 15:02:08.437050  492417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:08.448181  492417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:08.457929  492417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:08.467239  492417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 15:02:08.475941  492417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:08.485285  492417 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:08.502159  492417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:08.511866  492417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 15:02:08.520597  492417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 15:02:08.528852  492417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:02:08.696771  492417 ssh_runner.go:195] Run: sudo systemctl restart crio
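	The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place. Written out as a fresh drop-in instead (a sketch only: the 99-example.conf name is hypothetical and the usual CRI-O TOML tables are assumed), the end state is roughly:
	    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/99-example.conf
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	    EOF
	    sudo systemctl restart crio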
	I1121 15:02:08.910307  492417 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 15:02:08.910484  492417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 15:02:08.915132  492417 start.go:564] Will wait 60s for crictl version
	I1121 15:02:08.915300  492417 ssh_runner.go:195] Run: which crictl
	I1121 15:02:08.920653  492417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 15:02:08.983262  492417 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 15:02:08.983418  492417 ssh_runner.go:195] Run: crio --version
	I1121 15:02:09.034753  492417 ssh_runner.go:195] Run: crio --version
	I1121 15:02:09.068341  492417 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 15:02:09.071446  492417 cli_runner.go:164] Run: docker network inspect newest-cni-714993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 15:02:09.094225  492417 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 15:02:09.098577  492417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:02:09.116728  492417 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1121 15:02:06.199763  489211 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.111056857s
	I1121 15:02:09.012839  489211 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.922173207s
	I1121 15:02:09.090545  489211 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.001832647s
	I1121 15:02:09.113296  489211 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 15:02:09.134571  489211 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 15:02:09.157403  489211 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 15:02:09.158021  489211 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-124330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 15:02:09.177198  489211 kubeadm.go:319] [bootstrap-token] Using token: 9h4zwo.23zd3d5xj47thbsz
	I1121 15:02:09.180555  489211 out.go:252]   - Configuring RBAC rules ...
	I1121 15:02:09.180705  489211 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 15:02:09.188589  489211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 15:02:09.197360  489211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 15:02:09.204354  489211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 15:02:09.209324  489211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 15:02:09.214420  489211 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 15:02:09.498864  489211 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 15:02:09.960755  489211 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 15:02:10.513577  489211 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 15:02:10.516695  489211 kubeadm.go:319] 
	I1121 15:02:10.516855  489211 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 15:02:10.516871  489211 kubeadm.go:319] 
	I1121 15:02:10.516955  489211 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 15:02:10.516959  489211 kubeadm.go:319] 
	I1121 15:02:10.516986  489211 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 15:02:10.517048  489211 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 15:02:10.517102  489211 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 15:02:10.517106  489211 kubeadm.go:319] 
	I1121 15:02:10.517163  489211 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 15:02:10.517168  489211 kubeadm.go:319] 
	I1121 15:02:10.517218  489211 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 15:02:10.517223  489211 kubeadm.go:319] 
	I1121 15:02:10.517278  489211 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 15:02:10.517357  489211 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 15:02:10.517429  489211 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 15:02:10.517433  489211 kubeadm.go:319] 
	I1121 15:02:10.517522  489211 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 15:02:10.517603  489211 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 15:02:10.517608  489211 kubeadm.go:319] 
	I1121 15:02:10.517696  489211 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 9h4zwo.23zd3d5xj47thbsz \
	I1121 15:02:10.517810  489211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 15:02:10.517833  489211 kubeadm.go:319] 	--control-plane 
	I1121 15:02:10.517838  489211 kubeadm.go:319] 
	I1121 15:02:10.517927  489211 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 15:02:10.517931  489211 kubeadm.go:319] 
	I1121 15:02:10.518017  489211 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 9h4zwo.23zd3d5xj47thbsz \
	I1121 15:02:10.518135  489211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
	I1121 15:02:10.522074  489211 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 15:02:10.522467  489211 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 15:02:10.522613  489211 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 15:02:10.522628  489211 cni.go:84] Creating CNI manager for ""
	I1121 15:02:10.522636  489211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:02:10.527555  489211 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 15:02:09.119552  492417 kubeadm.go:884] updating cluster {Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 15:02:09.119699  492417 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:02:09.119783  492417 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:02:09.179351  492417 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:02:09.179372  492417 crio.go:433] Images already preloaded, skipping extraction
	I1121 15:02:09.179433  492417 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:02:09.221025  492417 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:02:09.221045  492417 cache_images.go:86] Images are preloaded, skipping loading
	I1121 15:02:09.221053  492417 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1121 15:02:09.221143  492417 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-714993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
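	The unit fragment above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty ExecStart= line is the systemd idiom for clearing the command list inherited from kubelet.service before substituting minikube's own flags. After writing a drop-in like this, the merged unit can be reviewed and reloaded:
	    systemctl cat kubelet                 # shows kubelet.service plus the 10-kubeadm.conf override
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet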
	I1121 15:02:09.221238  492417 ssh_runner.go:195] Run: crio config
	I1121 15:02:09.289888  492417 cni.go:84] Creating CNI manager for ""
	I1121 15:02:09.289912  492417 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:02:09.289929  492417 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1121 15:02:09.289953  492417 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-714993 NodeName:newest-cni-714993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 15:02:09.290084  492417 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-714993"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 15:02:09.290162  492417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 15:02:09.299962  492417 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 15:02:09.300033  492417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 15:02:09.308711  492417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 15:02:09.322888  492417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 15:02:09.336344  492417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
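	With kubeadm.yaml.new staged under /var/tmp/minikube, one way to sanity-check a generated config like the one printed above before the real init is kubeadm's dry-run mode, which performs preflight checks and renders the manifests without touching the node (a sketch, run on the node):
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run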
	I1121 15:02:09.351713  492417 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 15:02:09.355506  492417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:02:09.366532  492417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:02:09.531786  492417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:02:09.551778  492417 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993 for IP: 192.168.76.2
	I1121 15:02:09.551801  492417 certs.go:195] generating shared ca certs ...
	I1121 15:02:09.551817  492417 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:09.551955  492417 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 15:02:09.552001  492417 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 15:02:09.552023  492417 certs.go:257] generating profile certs ...
	I1121 15:02:09.552080  492417 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/client.key
	I1121 15:02:09.552098  492417 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/client.crt with IP's: []
	I1121 15:02:10.228784  492417 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/client.crt ...
	I1121 15:02:10.228857  492417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/client.crt: {Name:mk9bb33a662747f9cca97bfd40f6e109e74f1bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:10.229078  492417 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/client.key ...
	I1121 15:02:10.229092  492417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/client.key: {Name:mk469a106c629ec6dd9a6860afee9dedfd6d3db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:10.229177  492417 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.key.90646b61
	I1121 15:02:10.229190  492417 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.crt.90646b61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1121 15:02:10.589331  492417 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.crt.90646b61 ...
	I1121 15:02:10.589404  492417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.crt.90646b61: {Name:mk57b7dd8f32bef741169898666a35e8ebf7d527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:10.589658  492417 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.key.90646b61 ...
	I1121 15:02:10.589693  492417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.key.90646b61: {Name:mk9029f6f6ca556b69b1398acc7d2cec91e51692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:10.589842  492417 certs.go:382] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.crt.90646b61 -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.crt
	I1121 15:02:10.589985  492417 certs.go:386] copying /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.key.90646b61 -> /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.key
	I1121 15:02:10.590106  492417 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.key
	I1121 15:02:10.590145  492417 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.crt with IP's: []
	I1121 15:02:11.652493  492417 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.crt ...
	I1121 15:02:11.652567  492417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.crt: {Name:mkdd75e6e842c72b589cd0d6cff2c2d5fc3beadd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:11.652786  492417 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.key ...
	I1121 15:02:11.652825  492417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.key: {Name:mkc0733963c2eb9abf96dbd931e6a47be486f7a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
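
Three profile certs are generated in this block: a client cert for minikube-user, an apiserver serving cert whose IP SANs cover the service VIP (10.96.0.1), loopback, and the node IP (192.168.76.2), and a proxy-client cert for the aggregation layer. A self-contained Go sketch of the serving-cert step using only the standard library (the inline throwaway CA stands in for minikubeCA; error handling trimmed for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA (sketch only).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert with the same IP SANs the log shows for apiserver.crt.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
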
	I1121 15:02:11.653054  492417 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 15:02:11.653126  492417 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 15:02:11.653162  492417 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 15:02:11.653209  492417 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 15:02:11.653260  492417 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 15:02:11.653307  492417 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 15:02:11.653386  492417 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:02:11.654002  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 15:02:11.678280  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 15:02:11.698339  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 15:02:11.716197  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 15:02:11.735554  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 15:02:11.755157  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 15:02:11.774600  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 15:02:11.795464  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 15:02:11.814321  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 15:02:11.836938  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 15:02:11.859087  492417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 15:02:11.878157  492417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 15:02:11.895801  492417 ssh_runner.go:195] Run: openssl version
	I1121 15:02:11.902693  492417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 15:02:11.912254  492417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 15:02:11.916406  492417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 15:02:11.916480  492417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 15:02:11.964979  492417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 15:02:11.974506  492417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 15:02:11.988638  492417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 15:02:11.993126  492417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 15:02:11.993193  492417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 15:02:12.038046  492417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 15:02:12.047384  492417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 15:02:12.056177  492417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:02:12.060106  492417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:02:12.060174  492417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:02:12.102122  492417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
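
Each CA is made visible to OpenSSL-based clients by symlinking it under its subject hash: openssl x509 -hash prints the short hash (b5213941 for minikubeCA.pem here), and /etc/ssl/certs/<hash>.0 must point at the PEM, which is what the ln -fs guarded by test -L accomplishes. The same step as a small Go program shelling out to openssl (requires root for /etc/ssl/certs; paths copied from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // mirror ln -fs: drop any stale link first
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
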
	I1121 15:02:12.110506  492417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 15:02:12.114101  492417 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 15:02:12.114153  492417 kubeadm.go:401] StartCluster: {Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:02:12.114243  492417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 15:02:12.114303  492417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 15:02:12.143473  492417 cri.go:89] found id: ""
	I1121 15:02:12.143561  492417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 15:02:12.152755  492417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 15:02:12.161109  492417 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 15:02:12.161178  492417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 15:02:12.171913  492417 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 15:02:12.171933  492417 kubeadm.go:158] found existing configuration files:
	
	I1121 15:02:12.172018  492417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 15:02:12.180372  492417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 15:02:12.180544  492417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 15:02:12.189982  492417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 15:02:12.197910  492417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 15:02:12.197976  492417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 15:02:12.205974  492417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 15:02:12.213812  492417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 15:02:12.213877  492417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 15:02:12.222607  492417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 15:02:12.232824  492417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 15:02:12.232899  492417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
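
The block above is the stale-config sweep: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not match. On this first start every grep exits with status 2 because the files are absent, so each rm is a no-op. The logic reduces to something like this sketch (not the actual kubeadm.go code):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f) // stale or missing: clear it before kubeadm init
				fmt.Println("cleared:", f)
			}
		}
	}
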
	I1121 15:02:12.243212  492417 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 15:02:12.303713  492417 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 15:02:12.303815  492417 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 15:02:12.336859  492417 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 15:02:12.336935  492417 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 15:02:12.336973  492417 kubeadm.go:319] OS: Linux
	I1121 15:02:12.337033  492417 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 15:02:12.337089  492417 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 15:02:12.337144  492417 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 15:02:12.337199  492417 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 15:02:12.337252  492417 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 15:02:12.337312  492417 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 15:02:12.337363  492417 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 15:02:12.337419  492417 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 15:02:12.337474  492417 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 15:02:12.473187  492417 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 15:02:12.473375  492417 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 15:02:12.473528  492417 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 15:02:12.488782  492417 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 15:02:12.493926  492417 out.go:252]   - Generating certificates and keys ...
	I1121 15:02:12.494140  492417 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 15:02:12.494286  492417 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 15:02:10.534400  489211 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 15:02:10.542175  489211 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 15:02:10.542193  489211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 15:02:10.598587  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 15:02:11.036137  489211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 15:02:11.036278  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:11.036365  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-124330 minikube.k8s.io/updated_at=2025_11_21T15_02_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=default-k8s-diff-port-124330 minikube.k8s.io/primary=true
	I1121 15:02:11.387328  489211 ops.go:34] apiserver oom_adj: -16
	I1121 15:02:11.387436  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:11.887553  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:12.387557  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:12.887643  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:13.388166  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:13.887582  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:14.388487  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:14.888440  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:15.387624  489211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:15.646703  489211 kubeadm.go:1114] duration metric: took 4.610471757s to wait for elevateKubeSystemPrivileges
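
The burst of "kubectl get sa default" lines above is a fixed-cadence poll: minikube retries roughly every 500ms until the default service account exists, which took about 4.6s here. The shape of that loop, as a sketch (the timeout bound is an assumption, not taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed bound
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
		}
		fmt.Println("timed out waiting for default service account")
	}
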
	I1121 15:02:15.646729  489211 kubeadm.go:403] duration metric: took 24.959079246s to StartCluster
	I1121 15:02:15.646745  489211 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:15.646806  489211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:02:15.647476  489211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:15.647688  489211 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:02:15.647841  489211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 15:02:15.648098  489211 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:02:15.648131  489211 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 15:02:15.648196  489211 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-124330"
	I1121 15:02:15.648210  489211 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-124330"
	I1121 15:02:15.648231  489211 host.go:66] Checking if "default-k8s-diff-port-124330" exists ...
	I1121 15:02:15.648740  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:02:15.649028  489211 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-124330"
	I1121 15:02:15.649047  489211 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-124330"
	I1121 15:02:15.649297  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:02:15.651224  489211 out.go:179] * Verifying Kubernetes components...
	I1121 15:02:15.654406  489211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:02:15.692047  489211 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 15:02:15.692272  489211 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-124330"
	I1121 15:02:15.692312  489211 host.go:66] Checking if "default-k8s-diff-port-124330" exists ...
	I1121 15:02:15.692752  489211 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:02:15.695134  489211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:02:15.695154  489211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 15:02:15.695215  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:02:15.731664  489211 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 15:02:15.731689  489211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 15:02:15.731752  489211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:02:15.740581  489211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:02:15.766008  489211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:02:16.309032  489211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 15:02:16.313355  489211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:02:16.315024  489211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:02:16.315044  489211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 15:02:17.705884  489211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.392493182s)
	I1121 15:02:17.706144  489211 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.391075122s)
	I1121 15:02:17.706164  489211 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 15:02:17.706421  489211 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.391378083s)
	I1121 15:02:17.707118  489211 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-124330" to be "Ready" ...
	I1121 15:02:17.709930  489211 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1121 15:02:14.159745  492417 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 15:02:14.888070  492417 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 15:02:16.677830  492417 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 15:02:17.629461  492417 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 15:02:17.775426  492417 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 15:02:17.775786  492417 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-714993] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 15:02:18.157689  492417 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 15:02:18.158075  492417 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-714993] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 15:02:18.393669  492417 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 15:02:18.462157  492417 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 15:02:17.712861  489211 addons.go:530] duration metric: took 2.064706734s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1121 15:02:18.211253  489211 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-124330" context rescaled to 1 replicas
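
kubeadm deploys CoreDNS with two replicas; on a single-node cluster minikube immediately rescales the deployment to one, which is the "rescaled to 1 replicas" line above. Done with kubectl instead of minikube's client-go call in kapi.go, the step is just:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "-n", "kube-system",
			"scale", "deployment", "coredns", "--replicas=1",
			"--kubeconfig", "/var/lib/minikube/kubeconfig").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}
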
	I1121 15:02:19.950867  492417 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 15:02:19.951141  492417 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 15:02:20.249860  492417 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 15:02:21.066253  492417 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 15:02:21.578306  492417 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 15:02:22.228405  492417 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 15:02:22.486533  492417 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 15:02:22.487303  492417 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 15:02:22.490021  492417 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 15:02:22.493406  492417 out.go:252]   - Booting up control plane ...
	I1121 15:02:22.493533  492417 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 15:02:22.493621  492417 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 15:02:22.494095  492417 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 15:02:22.510532  492417 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 15:02:22.510900  492417 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 15:02:22.519084  492417 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 15:02:22.519912  492417 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 15:02:22.520205  492417 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 15:02:22.666565  492417 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 15:02:22.666693  492417 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1121 15:02:19.711581  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:22.210462  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:24.211788  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:24.667386  492417 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001160688s
	I1121 15:02:24.671046  492417 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 15:02:24.671147  492417 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1121 15:02:24.671492  492417 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 15:02:24.671583  492417 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 15:02:27.471787  492417 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.800345429s
	W1121 15:02:26.710223  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:28.710428  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:29.072063  492417 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.401033174s
	I1121 15:02:30.672537  492417 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001265229s
	I1121 15:02:30.695590  492417 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 15:02:30.720817  492417 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 15:02:30.746632  492417 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 15:02:30.747005  492417 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-714993 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 15:02:30.760592  492417 kubeadm.go:319] [bootstrap-token] Using token: 4tawwp.f3qe7tlnrnpfp3we
	I1121 15:02:30.765574  492417 out.go:252]   - Configuring RBAC rules ...
	I1121 15:02:30.765700  492417 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 15:02:30.772893  492417 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 15:02:30.781222  492417 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 15:02:30.785909  492417 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 15:02:30.794652  492417 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 15:02:30.801486  492417 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 15:02:31.079829  492417 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 15:02:31.521455  492417 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 15:02:32.080660  492417 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 15:02:32.082451  492417 kubeadm.go:319] 
	I1121 15:02:32.082544  492417 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 15:02:32.082556  492417 kubeadm.go:319] 
	I1121 15:02:32.082642  492417 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 15:02:32.082653  492417 kubeadm.go:319] 
	I1121 15:02:32.082680  492417 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 15:02:32.082756  492417 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 15:02:32.082813  492417 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 15:02:32.082820  492417 kubeadm.go:319] 
	I1121 15:02:32.082877  492417 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 15:02:32.082887  492417 kubeadm.go:319] 
	I1121 15:02:32.082937  492417 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 15:02:32.082943  492417 kubeadm.go:319] 
	I1121 15:02:32.082998  492417 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 15:02:32.083079  492417 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 15:02:32.083155  492417 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 15:02:32.083162  492417 kubeadm.go:319] 
	I1121 15:02:32.083277  492417 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 15:02:32.083362  492417 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 15:02:32.083372  492417 kubeadm.go:319] 
	I1121 15:02:32.083461  492417 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4tawwp.f3qe7tlnrnpfp3we \
	I1121 15:02:32.083572  492417 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 15:02:32.083596  492417 kubeadm.go:319] 	--control-plane 
	I1121 15:02:32.083601  492417 kubeadm.go:319] 
	I1121 15:02:32.083710  492417 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 15:02:32.083721  492417 kubeadm.go:319] 
	I1121 15:02:32.083807  492417 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4tawwp.f3qe7tlnrnpfp3we \
	I1121 15:02:32.083919  492417 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
	I1121 15:02:32.088835  492417 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 15:02:32.089075  492417 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 15:02:32.089189  492417 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
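
The --discovery-token-ca-cert-hash printed in the join command is not a hash of the whole certificate: it is sha256 over the CA certificate's Subject Public Key Info, which joining nodes use to pin the cluster CA. Recomputing it from the cert on disk (path as in the log):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm pins sha256(SubjectPublicKeyInfo), printed as "sha256:<hex>".
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}
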
	I1121 15:02:32.089209  492417 cni.go:84] Creating CNI manager for ""
	I1121 15:02:32.089217  492417 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:02:32.092529  492417 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 15:02:32.095453  492417 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 15:02:32.099696  492417 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 15:02:32.099721  492417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 15:02:32.117887  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 15:02:32.458574  492417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 15:02:32.458717  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:32.458786  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-714993 minikube.k8s.io/updated_at=2025_11_21T15_02_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=newest-cni-714993 minikube.k8s.io/primary=true
	I1121 15:02:32.701860  492417 ops.go:34] apiserver oom_adj: -16
	I1121 15:02:32.701961  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:33.202116  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:33.702064  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1121 15:02:30.712843  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:33.210949  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:34.202423  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:34.702080  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:35.202748  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:35.702616  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:36.202197  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:36.702127  492417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:02:36.816661  492417 kubeadm.go:1114] duration metric: took 4.35799274s to wait for elevateKubeSystemPrivileges
	I1121 15:02:36.816691  492417 kubeadm.go:403] duration metric: took 24.702540441s to StartCluster
	I1121 15:02:36.816708  492417 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:36.816766  492417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:02:36.817716  492417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:36.817909  492417 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:02:36.818053  492417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 15:02:36.818323  492417 config.go:182] Loaded profile config "newest-cni-714993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:02:36.818354  492417 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 15:02:36.818412  492417 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-714993"
	I1121 15:02:36.818425  492417 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-714993"
	I1121 15:02:36.818446  492417 host.go:66] Checking if "newest-cni-714993" exists ...
	I1121 15:02:36.818993  492417 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:36.819329  492417 addons.go:70] Setting default-storageclass=true in profile "newest-cni-714993"
	I1121 15:02:36.819362  492417 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-714993"
	I1121 15:02:36.819642  492417 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:36.823687  492417 out.go:179] * Verifying Kubernetes components...
	I1121 15:02:36.826579  492417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:02:36.864151  492417 addons.go:239] Setting addon default-storageclass=true in "newest-cni-714993"
	I1121 15:02:36.864191  492417 host.go:66] Checking if "newest-cni-714993" exists ...
	I1121 15:02:36.864654  492417 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:36.864825  492417 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 15:02:36.867844  492417 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:02:36.867865  492417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 15:02:36.867921  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:36.908679  492417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:36.916317  492417 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 15:02:36.916338  492417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 15:02:36.916436  492417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:36.944348  492417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:37.123234  492417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 15:02:37.167894  492417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:02:37.240738  492417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:02:37.247712  492417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 15:02:37.692696  492417 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1121 15:02:37.693663  492417 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:02:37.694813  492417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:02:38.088302  492417 api_server.go:72] duration metric: took 1.270366081s to wait for apiserver process to appear ...
	I1121 15:02:38.088372  492417 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:02:38.088445  492417 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:38.100196  492417 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 15:02:38.101853  492417 api_server.go:141] control plane version: v1.34.1
	I1121 15:02:38.101884  492417 api_server.go:131] duration metric: took 13.452521ms to wait for apiserver health ...
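
The health wait above is a plain HTTPS GET against /healthz on the node IP, repeated until the body reads "ok"; on a default cluster that endpoint is readable without credentials. A stripped-down version of the probe (InsecureSkipVerify stands in for the CA trust minikube actually configures):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
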
	I1121 15:02:38.101903  492417 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:02:38.107433  492417 system_pods.go:59] 8 kube-system pods found
	I1121 15:02:38.107481  492417 system_pods.go:61] "coredns-66bc5c9577-gg7hh" [9870c48c-8548-4838-8cb4-9174010fdcd0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 15:02:38.107497  492417 system_pods.go:61] "etcd-newest-cni-714993" [c62fc121-98a5-4101-a4f7-b563520e09a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:02:38.107506  492417 system_pods.go:61] "kindnet-jssq6" [da7ab922-ecf7-449c-9aac-481926be6add] Running
	I1121 15:02:38.107520  492417 system_pods.go:61] "kube-apiserver-newest-cni-714993" [a9d0ff40-44af-4f7c-beed-5c0b8061b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:02:38.107542  492417 system_pods.go:61] "kube-controller-manager-newest-cni-714993" [f3df974c-7a27-41a2-aaae-664da491b689] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:02:38.107555  492417 system_pods.go:61] "kube-proxy-jmrq8" [153afdbe-a8ec-43f0-a76c-4b9c81867c6e] Running
	I1121 15:02:38.107563  492417 system_pods.go:61] "kube-scheduler-newest-cni-714993" [257585ec-0c18-4a96-a45d-d924cf069dff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:02:38.107579  492417 system_pods.go:61] "storage-provisioner" [36238ceb-8620-4b93-9a0a-f802b27e8c16] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 15:02:38.107588  492417 system_pods.go:74] duration metric: took 5.679385ms to wait for pod list to return data ...
	I1121 15:02:38.107596  492417 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:02:38.108496  492417 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 15:02:38.121530  492417 addons.go:530] duration metric: took 1.303148281s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 15:02:38.123188  492417 default_sa.go:45] found service account: "default"
	I1121 15:02:38.123218  492417 default_sa.go:55] duration metric: took 15.61436ms for default service account to be created ...
	I1121 15:02:38.123231  492417 kubeadm.go:587] duration metric: took 1.305299591s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 15:02:38.123264  492417 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:02:38.126080  492417 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:02:38.126123  492417 node_conditions.go:123] node cpu capacity is 2
	I1121 15:02:38.126137  492417 node_conditions.go:105] duration metric: took 2.866569ms to run NodePressure ...
	I1121 15:02:38.126168  492417 start.go:242] waiting for startup goroutines ...
	I1121 15:02:38.197840  492417 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-714993" context rescaled to 1 replicas
	I1121 15:02:38.197891  492417 start.go:247] waiting for cluster config update ...
	I1121 15:02:38.197913  492417 start.go:256] writing updated cluster config ...
	I1121 15:02:38.198263  492417 ssh_runner.go:195] Run: rm -f paused
	I1121 15:02:38.264466  492417 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:02:38.267602  492417 out.go:179] * Done! kubectl is now configured to use "newest-cni-714993" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.220637024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.224587847Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5b0dd6af-a0ed-4275-8a42-27148e08f371 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.227757534Z" level=info msg="Ran pod sandbox e27d0714a45dea97ecd522505813bad1b4e6f9ac237dc24832d2f246a703e06c with infra container: kube-system/kindnet-jssq6/POD" id=5b0dd6af-a0ed-4275-8a42-27148e08f371 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.230149118Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b92ba8c7-0f68-4bfd-87ff-75dc5f53cfd2 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.236530969Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8baf714e-f49b-484e-a358-188700db0959 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.255169003Z" level=info msg="Creating container: kube-system/kindnet-jssq6/kindnet-cni" id=24c29ee8-5804-44fe-aba6-e706c0ad2a16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.255275532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.259644001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.26025191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.27322354Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-jmrq8/POD" id=640429cb-d5eb-4c5a-9b13-8d2c96d0657e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.273515424Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.280290518Z" level=info msg="Created container ecb1257b611b329002a989ddba8e3f08db87fe5ca2f6bf0617e6e7c14d27f9db: kube-system/kindnet-jssq6/kindnet-cni" id=24c29ee8-5804-44fe-aba6-e706c0ad2a16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.284045869Z" level=info msg="Starting container: ecb1257b611b329002a989ddba8e3f08db87fe5ca2f6bf0617e6e7c14d27f9db" id=39636ed0-8166-42bb-b2c8-bc4b2db6150c name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.285967056Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=640429cb-d5eb-4c5a-9b13-8d2c96d0657e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.294630952Z" level=info msg="Ran pod sandbox 35d1522eb4f2d4aa46a34e77123232bce0ecde2d39a70d1c4bb65b9da6d46b1f with infra container: kube-system/kube-proxy-jmrq8/POD" id=640429cb-d5eb-4c5a-9b13-8d2c96d0657e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.297327476Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=bbd82582-7a56-422c-8757-5474c5f7f995 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.304975384Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d0b540b3-0145-460e-8cf8-53c36b405be0 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.309325638Z" level=info msg="Started container" PID=1504 containerID=ecb1257b611b329002a989ddba8e3f08db87fe5ca2f6bf0617e6e7c14d27f9db description=kube-system/kindnet-jssq6/kindnet-cni id=39636ed0-8166-42bb-b2c8-bc4b2db6150c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e27d0714a45dea97ecd522505813bad1b4e6f9ac237dc24832d2f246a703e06c
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.311340513Z" level=info msg="Creating container: kube-system/kube-proxy-jmrq8/kube-proxy" id=32676f52-8091-4f8c-8a58-b76c78c9788f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.312524896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.322818193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.325816111Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.352961805Z" level=info msg="Created container 2d70f4e5e265031ddfa7b6aaa045b8baaf04b6cb98743067ecbebea6d154b801: kube-system/kube-proxy-jmrq8/kube-proxy" id=32676f52-8091-4f8c-8a58-b76c78c9788f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.357142432Z" level=info msg="Starting container: 2d70f4e5e265031ddfa7b6aaa045b8baaf04b6cb98743067ecbebea6d154b801" id=df1cef95-8239-40b5-a05f-ac0a0e088343 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:02:37 newest-cni-714993 crio[841]: time="2025-11-21T15:02:37.37252223Z" level=info msg="Started container" PID=1520 containerID=2d70f4e5e265031ddfa7b6aaa045b8baaf04b6cb98743067ecbebea6d154b801 description=kube-system/kube-proxy-jmrq8/kube-proxy id=df1cef95-8239-40b5-a05f-ac0a0e088343 name=/runtime.v1.RuntimeService/StartContainer sandboxID=35d1522eb4f2d4aa46a34e77123232bce0ecde2d39a70d1c4bb65b9da6d46b1f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2d70f4e5e2650       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   35d1522eb4f2d       kube-proxy-jmrq8                            kube-system
	ecb1257b611b3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   e27d0714a45de       kindnet-jssq6                               kube-system
	3c2cd354f5652       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            0                   049ad5f151bb3       kube-apiserver-newest-cni-714993            kube-system
	669be3e155ed1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            0                   9811d55363063       kube-scheduler-newest-cni-714993            kube-system
	6ea67f62e2c6d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      0                   b5f5241adae1e       etcd-newest-cni-714993                      kube-system
	45eb53a2e2527       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   0                   0935f0bb33942       kube-controller-manager-newest-cni-714993   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-714993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-714993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=newest-cni-714993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T15_02_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 15:02:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-714993
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:02:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:02:31 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:02:31 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:02:31 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 21 Nov 2025 15:02:31 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-714993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                bc9159db-8195-45b8-b93a-134eb7c35db1
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-714993                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-jssq6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-714993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-714993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-jmrq8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-714993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 1s    kube-proxy       
	  Normal   Starting                 8s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s    kubelet          Node newest-cni-714993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-714993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s    kubelet          Node newest-cni-714993 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s    node-controller  Node newest-cni-714993 event: Registered Node newest-cni-714993 in Controller
	
	
	==> dmesg <==
	[Nov21 14:39] overlayfs: idmapped layers are currently not supported
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	[Nov21 15:00] overlayfs: idmapped layers are currently not supported
	[ +13.392744] overlayfs: idmapped layers are currently not supported
	[Nov21 15:01] overlayfs: idmapped layers are currently not supported
	[Nov21 15:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6ea67f62e2c6db32f9dcb4d9ba3a03c85dcf4363bce0c4d5905e4cbb82a94a0d] <==
	{"level":"warn","ts":"2025-11-21T15:02:27.021625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.053600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.059479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.084959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.102861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.116818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.138821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.174831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.175821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.213308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.248874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.308691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.311241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.337217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.393275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.396559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.418427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.443994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.480413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.490809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.517187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.546558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.570151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.586199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:27.701412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34936","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:02:39 up  2:45,  0 user,  load average: 4.87, 3.71, 2.89
	Linux newest-cni-714993 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ecb1257b611b329002a989ddba8e3f08db87fe5ca2f6bf0617e6e7c14d27f9db] <==
	I1121 15:02:37.408815       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 15:02:37.409464       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 15:02:37.413030       1 main.go:148] setting mtu 1500 for CNI 
	I1121 15:02:37.413053       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 15:02:37.413068       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T15:02:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 15:02:37.606134       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 15:02:37.606152       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 15:02:37.606160       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 15:02:37.606874       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [3c2cd354f56529f23b9cac377acc5c64db38d2b44d6a14c345da38dff59d5ef8] <==
	E1121 15:02:28.831505       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1121 15:02:28.834571       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 15:02:28.836489       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 15:02:28.893527       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:02:28.893529       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 15:02:28.912528       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:02:28.914349       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 15:02:29.043003       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 15:02:29.436067       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 15:02:29.441392       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 15:02:29.441423       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 15:02:30.342528       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 15:02:30.407413       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 15:02:30.545814       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 15:02:30.553553       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1121 15:02:30.554799       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 15:02:30.560506       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 15:02:30.606976       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 15:02:31.502691       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 15:02:31.519825       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 15:02:31.534648       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 15:02:35.620634       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 15:02:36.219575       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:02:36.226136       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:02:36.813960       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [45eb53a2e25277d9c462e03cba24136f5e1f979dffff833b2dd8af66b8a107dd] <==
	I1121 15:02:35.649346       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-714993" podCIDRs=["10.42.0.0/24"]
	I1121 15:02:35.654157       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:02:35.654239       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 15:02:35.654270       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 15:02:35.654393       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 15:02:35.654693       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 15:02:35.654776       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 15:02:35.673720       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 15:02:35.674035       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 15:02:35.674128       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 15:02:35.674175       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 15:02:35.674256       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 15:02:35.674473       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 15:02:35.674537       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 15:02:35.674619       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 15:02:35.674777       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 15:02:35.674867       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 15:02:35.676642       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 15:02:35.676733       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 15:02:35.676748       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 15:02:35.676802       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 15:02:35.676841       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 15:02:35.677407       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 15:02:35.677741       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:02:35.688507       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [2d70f4e5e265031ddfa7b6aaa045b8baaf04b6cb98743067ecbebea6d154b801] <==
	I1121 15:02:37.456740       1 server_linux.go:53] "Using iptables proxy"
	I1121 15:02:37.539003       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 15:02:37.643672       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 15:02:37.643718       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 15:02:37.643790       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 15:02:37.690818       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 15:02:37.690879       1 server_linux.go:132] "Using iptables Proxier"
	I1121 15:02:37.728852       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 15:02:37.729189       1 server.go:527] "Version info" version="v1.34.1"
	I1121 15:02:37.729205       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:02:37.730835       1 config.go:200] "Starting service config controller"
	I1121 15:02:37.730846       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 15:02:37.730862       1 config.go:106] "Starting endpoint slice config controller"
	I1121 15:02:37.730866       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 15:02:37.730877       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 15:02:37.730881       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 15:02:37.740251       1 config.go:309] "Starting node config controller"
	I1121 15:02:37.740267       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 15:02:37.740274       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 15:02:37.831463       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 15:02:37.831498       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 15:02:37.831545       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [669be3e155ed1ef0f7b681a933b68b8144258630ba6eb0f7cdce0bded573b873] <==
	I1121 15:02:29.057382       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 15:02:29.057606       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:02:29.057652       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:02:29.057694       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1121 15:02:29.071954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 15:02:29.075039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 15:02:29.075193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 15:02:29.075289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 15:02:29.075508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 15:02:29.075563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 15:02:29.075615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 15:02:29.075724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 15:02:29.075758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 15:02:29.075798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 15:02:29.075831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 15:02:29.075865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 15:02:29.075898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 15:02:29.075931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 15:02:29.075962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 15:02:29.077526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 15:02:29.077609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 15:02:29.077715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 15:02:29.077842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 15:02:30.026402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1121 15:02:32.958560       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 15:02:31 newest-cni-714993 kubelet[1331]: I1121 15:02:31.785120    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04578fa67dc5e4ce0b003b57a444d7d5-usr-local-share-ca-certificates\") pod \"kube-apiserver-newest-cni-714993\" (UID: \"04578fa67dc5e4ce0b003b57a444d7d5\") " pod="kube-system/kube-apiserver-newest-cni-714993"
	Nov 21 15:02:31 newest-cni-714993 kubelet[1331]: I1121 15:02:31.785139    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41b87353f7f6ce77d32f5e0633ae9f69-ca-certs\") pod \"kube-controller-manager-newest-cni-714993\" (UID: \"41b87353f7f6ce77d32f5e0633ae9f69\") " pod="kube-system/kube-controller-manager-newest-cni-714993"
	Nov 21 15:02:31 newest-cni-714993 kubelet[1331]: I1121 15:02:31.785161    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41b87353f7f6ce77d32f5e0633ae9f69-etc-ca-certificates\") pod \"kube-controller-manager-newest-cni-714993\" (UID: \"41b87353f7f6ce77d32f5e0633ae9f69\") " pod="kube-system/kube-controller-manager-newest-cni-714993"
	Nov 21 15:02:32 newest-cni-714993 kubelet[1331]: I1121 15:02:32.428166    1331 apiserver.go:52] "Watching apiserver"
	Nov 21 15:02:32 newest-cni-714993 kubelet[1331]: I1121 15:02:32.484151    1331 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 21 15:02:32 newest-cni-714993 kubelet[1331]: I1121 15:02:32.561389    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-714993" podStartSLOduration=2.561368409 podStartE2EDuration="2.561368409s" podCreationTimestamp="2025-11-21 15:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:32.535071216 +0000 UTC m=+1.188560329" watchObservedRunningTime="2025-11-21 15:02:32.561368409 +0000 UTC m=+1.214857522"
	Nov 21 15:02:32 newest-cni-714993 kubelet[1331]: I1121 15:02:32.585971    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-714993" podStartSLOduration=1.585952121 podStartE2EDuration="1.585952121s" podCreationTimestamp="2025-11-21 15:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:32.561841136 +0000 UTC m=+1.215330241" watchObservedRunningTime="2025-11-21 15:02:32.585952121 +0000 UTC m=+1.239441234"
	Nov 21 15:02:32 newest-cni-714993 kubelet[1331]: I1121 15:02:32.595901    1331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-714993"
	Nov 21 15:02:32 newest-cni-714993 kubelet[1331]: I1121 15:02:32.620132    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-714993" podStartSLOduration=2.620101302 podStartE2EDuration="2.620101302s" podCreationTimestamp="2025-11-21 15:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:32.586348023 +0000 UTC m=+1.239837128" watchObservedRunningTime="2025-11-21 15:02:32.620101302 +0000 UTC m=+1.273590407"
	Nov 21 15:02:32 newest-cni-714993 kubelet[1331]: I1121 15:02:32.620247    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-714993" podStartSLOduration=3.6202425959999998 podStartE2EDuration="3.620242596s" podCreationTimestamp="2025-11-21 15:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:32.619771452 +0000 UTC m=+1.273260565" watchObservedRunningTime="2025-11-21 15:02:32.620242596 +0000 UTC m=+1.273731717"
	Nov 21 15:02:32 newest-cni-714993 kubelet[1331]: E1121 15:02:32.629140    1331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-714993\" already exists" pod="kube-system/kube-apiserver-newest-cni-714993"
	Nov 21 15:02:35 newest-cni-714993 kubelet[1331]: I1121 15:02:35.739722    1331 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 21 15:02:35 newest-cni-714993 kubelet[1331]: I1121 15:02:35.740871    1331 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 21 15:02:36 newest-cni-714993 kubelet[1331]: I1121 15:02:36.927607    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da7ab922-ecf7-449c-9aac-481926be6add-lib-modules\") pod \"kindnet-jssq6\" (UID: \"da7ab922-ecf7-449c-9aac-481926be6add\") " pod="kube-system/kindnet-jssq6"
	Nov 21 15:02:36 newest-cni-714993 kubelet[1331]: I1121 15:02:36.927666    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da7ab922-ecf7-449c-9aac-481926be6add-xtables-lock\") pod \"kindnet-jssq6\" (UID: \"da7ab922-ecf7-449c-9aac-481926be6add\") " pod="kube-system/kindnet-jssq6"
	Nov 21 15:02:36 newest-cni-714993 kubelet[1331]: I1121 15:02:36.927687    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/da7ab922-ecf7-449c-9aac-481926be6add-cni-cfg\") pod \"kindnet-jssq6\" (UID: \"da7ab922-ecf7-449c-9aac-481926be6add\") " pod="kube-system/kindnet-jssq6"
	Nov 21 15:02:36 newest-cni-714993 kubelet[1331]: I1121 15:02:36.927709    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26wbh\" (UniqueName: \"kubernetes.io/projected/da7ab922-ecf7-449c-9aac-481926be6add-kube-api-access-26wbh\") pod \"kindnet-jssq6\" (UID: \"da7ab922-ecf7-449c-9aac-481926be6add\") " pod="kube-system/kindnet-jssq6"
	Nov 21 15:02:37 newest-cni-714993 kubelet[1331]: I1121 15:02:37.028247    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/153afdbe-a8ec-43f0-a76c-4b9c81867c6e-kube-proxy\") pod \"kube-proxy-jmrq8\" (UID: \"153afdbe-a8ec-43f0-a76c-4b9c81867c6e\") " pod="kube-system/kube-proxy-jmrq8"
	Nov 21 15:02:37 newest-cni-714993 kubelet[1331]: I1121 15:02:37.028326    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xspr5\" (UniqueName: \"kubernetes.io/projected/153afdbe-a8ec-43f0-a76c-4b9c81867c6e-kube-api-access-xspr5\") pod \"kube-proxy-jmrq8\" (UID: \"153afdbe-a8ec-43f0-a76c-4b9c81867c6e\") " pod="kube-system/kube-proxy-jmrq8"
	Nov 21 15:02:37 newest-cni-714993 kubelet[1331]: I1121 15:02:37.028413    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/153afdbe-a8ec-43f0-a76c-4b9c81867c6e-xtables-lock\") pod \"kube-proxy-jmrq8\" (UID: \"153afdbe-a8ec-43f0-a76c-4b9c81867c6e\") " pod="kube-system/kube-proxy-jmrq8"
	Nov 21 15:02:37 newest-cni-714993 kubelet[1331]: I1121 15:02:37.028433    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/153afdbe-a8ec-43f0-a76c-4b9c81867c6e-lib-modules\") pod \"kube-proxy-jmrq8\" (UID: \"153afdbe-a8ec-43f0-a76c-4b9c81867c6e\") " pod="kube-system/kube-proxy-jmrq8"
	Nov 21 15:02:37 newest-cni-714993 kubelet[1331]: I1121 15:02:37.071718    1331 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 15:02:37 newest-cni-714993 kubelet[1331]: W1121 15:02:37.290398    1331 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/crio-35d1522eb4f2d4aa46a34e77123232bce0ecde2d39a70d1c4bb65b9da6d46b1f WatchSource:0}: Error finding container 35d1522eb4f2d4aa46a34e77123232bce0ecde2d39a70d1c4bb65b9da6d46b1f: Status 404 returned error can't find the container with id 35d1522eb4f2d4aa46a34e77123232bce0ecde2d39a70d1c4bb65b9da6d46b1f
	Nov 21 15:02:37 newest-cni-714993 kubelet[1331]: I1121 15:02:37.665132    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jssq6" podStartSLOduration=1.6651130410000001 podStartE2EDuration="1.665113041s" podCreationTimestamp="2025-11-21 15:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:37.664994566 +0000 UTC m=+6.318483679" watchObservedRunningTime="2025-11-21 15:02:37.665113041 +0000 UTC m=+6.318602146"
	Nov 21 15:02:37 newest-cni-714993 kubelet[1331]: I1121 15:02:37.665250    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jmrq8" podStartSLOduration=1.6652437660000001 podStartE2EDuration="1.665243766s" podCreationTimestamp="2025-11-21 15:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:37.63861199 +0000 UTC m=+6.292101095" watchObservedRunningTime="2025-11-21 15:02:37.665243766 +0000 UTC m=+6.318732896"
	

                                                
                                                
-- /stdout --
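
Editor's note: the "describe nodes" block in the snapshot above catches the cluster only seconds after startup. The node reports Ready=False with reason KubeletNotReady because no CNI configuration exists yet in /etc/cni/net.d/ (kindnet had only just started). Below is a minimal client-go sketch that reads the same NodeReady condition; the file name, and the assumption that minikube's kubeconfig is merged into the default kubeconfig path, are illustrative, and this helper is not part of the test suite.

	// nodeready.go - sketch only: print the NodeReady condition that the
	// "describe nodes" snapshot above shows as False/KubeletNotReady.
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumes the kubeconfig written by minikube is at the default location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-714993", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// At the moment of the snapshot this reports False/KubeletNotReady,
				// because no CNI config had been written to /etc/cni/net.d/ yet.
				fmt.Printf("Ready=%v reason=%s message=%s\n", c.Status, c.Reason, c.Message)
			}
		}
	}
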
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-714993 -n newest-cni-714993
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-714993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gg7hh storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-714993 describe pod coredns-66bc5c9577-gg7hh storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-714993 describe pod coredns-66bc5c9577-gg7hh storage-provisioner: exit status 1 (89.405004ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gg7hh" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-714993 describe pod coredns-66bc5c9577-gg7hh storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.45s)
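
Editor's note: the post-mortem above narrows the failure to two pods that never reached Running, coredns-66bc5c9577-gg7hh and storage-provisioner, which no longer existed by the time describe ran (hence the NotFound errors). The query helpers_test.go:269 issues via kubectl maps directly onto a client-go list call; a minimal sketch, assuming the default kubeconfig path (the file name is illustrative, not the suite's helper):

	// nonrunning.go - sketch only: list pods whose phase is not Running,
	// the same predicate as --field-selector=status.phase!=Running above.
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same predicate the helper passes to kubectl across all namespaces.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
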

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-714993 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-714993 --alsologtostderr -v=1: exit status 80 (2.387261738s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-714993 ... 
	
	

                                                
                                                
-- /stdout --
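
Editor's note: the stderr transcript below shows why pause exits with GUEST_PAUSE. After stopping the kubelet, minikube enumerates kube-system containers through crictl and then probes the low-level runtime with "sudo runc list -f json"; on this node /run/runc does not exist, so every attempt fails and the retry loop gives up after three tries. A minimal sketch of that probe-and-retry shape follows; the command is taken from the log, but the attempt count and delays are illustrative assumptions, not minikube's retry.go.

	// retrysketch.go - sketch only: re-run a failing probe a few times with a
	// short randomized delay, then give up with the last error, mirroring the
	// "will retry after ..." lines in the stderr below.
	package main
	
	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)
	
	func listRunc() error {
		// The probe the pause path runs on the node; here it fails because
		// /run/runc does not exist (see the stderr transcript below).
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list: %w: %s", err, out)
		}
		return nil
	}
	
	func main() {
		var err error
		for attempt := 0; attempt < 3; attempt++ {
			if err = listRunc(); err == nil {
				return
			}
			delay := time.Duration(250+rand.Intn(100)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		fmt.Printf("giving up: %v\n", err)
	}
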
** stderr ** 
	I1121 15:02:59.032963  497923 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:02:59.033146  497923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:02:59.033159  497923 out.go:374] Setting ErrFile to fd 2...
	I1121 15:02:59.033165  497923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:02:59.033439  497923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:02:59.033695  497923 out.go:368] Setting JSON to false
	I1121 15:02:59.033723  497923 mustload.go:66] Loading cluster: newest-cni-714993
	I1121 15:02:59.034113  497923 config.go:182] Loaded profile config "newest-cni-714993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:02:59.034729  497923 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:59.067969  497923 host.go:66] Checking if "newest-cni-714993" exists ...
	I1121 15:02:59.068306  497923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:02:59.168493  497923 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 15:02:59.158066464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:02:59.169125  497923 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-714993 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 15:02:59.172492  497923 out.go:179] * Pausing node newest-cni-714993 ... 
	I1121 15:02:59.175241  497923 host.go:66] Checking if "newest-cni-714993" exists ...
	I1121 15:02:59.175601  497923 ssh_runner.go:195] Run: systemctl --version
	I1121 15:02:59.175644  497923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:59.197231  497923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:59.304553  497923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:02:59.319525  497923 pause.go:52] kubelet running: true
	I1121 15:02:59.319596  497923 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:02:59.545047  497923 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:02:59.545151  497923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:02:59.667076  497923 cri.go:89] found id: "4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e"
	I1121 15:02:59.667097  497923 cri.go:89] found id: "91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72"
	I1121 15:02:59.667102  497923 cri.go:89] found id: "2edf2c715c49a8ea3535cad8175de2b076e3defaf18e6a36a9e7d31008d89625"
	I1121 15:02:59.667106  497923 cri.go:89] found id: "730f5e074ea29c84ee762fa289c70402bf32780d481aaa7682e731dd9794d540"
	I1121 15:02:59.667109  497923 cri.go:89] found id: "a66dde0c760949de4963a031792226b729cb39589bf9b5e48c1f90fc16d85523"
	I1121 15:02:59.667126  497923 cri.go:89] found id: "6e6566081c03aa51453671dda29548d283a3156fd32976da87a2e0708b5ca23e"
	I1121 15:02:59.667131  497923 cri.go:89] found id: ""
	I1121 15:02:59.667182  497923 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:02:59.682486  497923 retry.go:31] will retry after 282.252659ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:02:59Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:02:59.965833  497923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:02:59.979083  497923 pause.go:52] kubelet running: false
	I1121 15:02:59.979155  497923 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:03:00.497000  497923 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:03:00.497112  497923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:03:00.733849  497923 cri.go:89] found id: "4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e"
	I1121 15:03:00.733869  497923 cri.go:89] found id: "91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72"
	I1121 15:03:00.733874  497923 cri.go:89] found id: "2edf2c715c49a8ea3535cad8175de2b076e3defaf18e6a36a9e7d31008d89625"
	I1121 15:03:00.733877  497923 cri.go:89] found id: "730f5e074ea29c84ee762fa289c70402bf32780d481aaa7682e731dd9794d540"
	I1121 15:03:00.733880  497923 cri.go:89] found id: "a66dde0c760949de4963a031792226b729cb39589bf9b5e48c1f90fc16d85523"
	I1121 15:03:00.733884  497923 cri.go:89] found id: "6e6566081c03aa51453671dda29548d283a3156fd32976da87a2e0708b5ca23e"
	I1121 15:03:00.733887  497923 cri.go:89] found id: ""
	I1121 15:03:00.733935  497923 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:03:00.749007  497923 retry.go:31] will retry after 306.380884ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:03:00Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:03:01.056529  497923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:03:01.069664  497923 pause.go:52] kubelet running: false
	I1121 15:03:01.069730  497923 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:03:01.214547  497923 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:03:01.214629  497923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:03:01.310286  497923 cri.go:89] found id: "4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e"
	I1121 15:03:01.310312  497923 cri.go:89] found id: "91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72"
	I1121 15:03:01.310317  497923 cri.go:89] found id: "2edf2c715c49a8ea3535cad8175de2b076e3defaf18e6a36a9e7d31008d89625"
	I1121 15:03:01.310321  497923 cri.go:89] found id: "730f5e074ea29c84ee762fa289c70402bf32780d481aaa7682e731dd9794d540"
	I1121 15:03:01.310324  497923 cri.go:89] found id: "a66dde0c760949de4963a031792226b729cb39589bf9b5e48c1f90fc16d85523"
	I1121 15:03:01.310328  497923 cri.go:89] found id: "6e6566081c03aa51453671dda29548d283a3156fd32976da87a2e0708b5ca23e"
	I1121 15:03:01.310331  497923 cri.go:89] found id: ""
	I1121 15:03:01.310385  497923 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:03:01.327471  497923 out.go:203] 
	W1121 15:03:01.330485  497923 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:03:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 15:03:01.330513  497923 out.go:285] * 
	W1121 15:03:01.336170  497923 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 15:03:01.339126  497923 out.go:203] 

** /stderr **
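The failure mode above is consistent: sudo runc list -f json exits with status 1 because /run/runc, the runc state directory, is missing on the node, even though crictl still reports six running containers. A minimal sketch for inspecting the node by hand, assuming the newest-cni-714993 profile from this run is still up (all three commands are taken directly from the log above):

    # The exact call minikube retried while pausing:
    minikube ssh -p newest-cni-714993 -- sudo runc list -f json
    # Confirm the runc state directory is absent:
    minikube ssh -p newest-cni-714993 -- sudo ls /run/runc
    # Compare with what the CRI still reports as running:
    minikube ssh -p newest-cni-714993 -- sudo crictl ps -a --quiet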
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-714993 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-714993
helpers_test.go:243: (dbg) docker inspect newest-cni-714993:

-- stdout --
	[
	    {
	        "Id": "bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a",
	        "Created": "2025-11-21T15:02:00.230610086Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T15:02:42.684711688Z",
	            "FinishedAt": "2025-11-21T15:02:41.627676029Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/hostname",
	        "HostsPath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/hosts",
	        "LogPath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a-json.log",
	        "Name": "/newest-cni-714993",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-714993:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-714993",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a",
	                "LowerDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-714993",
	                "Source": "/var/lib/docker/volumes/newest-cni-714993/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-714993",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-714993",
	                "name.minikube.sigs.k8s.io": "newest-cni-714993",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0fb16c59c58f08c0514c75b8947cacc116f597fd50608cf63e7d50eb45655083",
	            "SandboxKey": "/var/run/docker/netns/0fb16c59c58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-714993": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:db:e5:67:23:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1cfea5893685cdf8198d74a4f99484841fa068338f22db34f688b7b58b6435e9",
	                    "EndpointID": "59171f746596d7d8919058cb99b2f9c27cf5ddbc1266ee34b098ae0036b53fbb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-714993",
	                        "bc5829e976c0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
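The forwarded host ports in the inspect output above are how the harness reaches the node over SSH. As a sketch, the same Go template the log shows minikube running (the cli_runner lines earlier) recovers the SSH port (33458 in this run) on its own:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-714993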
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-714993 -n newest-cni-714993
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-714993 -n newest-cni-714993: exit status 2 (364.025749ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
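Exit status 2 here most likely reflects that the pause attempt above already ran "systemctl disable --now kubelet", so the host is Running while kubelet is not. A sketch for seeing the per-component breakdown (--output json is a standard minikube status flag, not harness-specific):

    out/minikube-linux-arm64 status -p newest-cni-714993 --output json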
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-714993 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-714993 logs -n 25: (1.202481927s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ stop    │ -p no-preload-844780 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ addons  │ enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-844780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ stop    │ -p embed-certs-902161 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-902161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ image   │ no-preload-844780 image list --format=json                                                                                                                                                                                                    │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p no-preload-844780 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ image   │ embed-certs-902161 image list --format=json                                                                                                                                                                                                   │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p embed-certs-902161 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-714993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	│ stop    │ -p newest-cni-714993 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-714993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ image   │ newest-cni-714993 image list --format=json                                                                                                                                                                                                    │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ pause   │ -p newest-cni-714993 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 15:02:42
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 15:02:42.390744  496079 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:02:42.390917  496079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:02:42.390929  496079 out.go:374] Setting ErrFile to fd 2...
	I1121 15:02:42.390934  496079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:02:42.391226  496079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:02:42.391661  496079 out.go:368] Setting JSON to false
	I1121 15:02:42.392805  496079 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9914,"bootTime":1763727448,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 15:02:42.392885  496079 start.go:143] virtualization:  
	I1121 15:02:42.396085  496079 out.go:179] * [newest-cni-714993] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 15:02:42.400238  496079 notify.go:221] Checking for updates...
	I1121 15:02:42.401144  496079 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 15:02:42.404265  496079 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 15:02:42.407545  496079 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:02:42.410461  496079 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 15:02:42.413463  496079 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 15:02:42.416275  496079 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 15:02:42.419551  496079 config.go:182] Loaded profile config "newest-cni-714993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:02:42.420171  496079 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 15:02:42.446989  496079 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 15:02:42.447128  496079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:02:42.522186  496079 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 15:02:42.511863031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:02:42.522295  496079 docker.go:319] overlay module found
	I1121 15:02:42.525383  496079 out.go:179] * Using the docker driver based on existing profile
	I1121 15:02:42.528304  496079 start.go:309] selected driver: docker
	I1121 15:02:42.528331  496079 start.go:930] validating driver "docker" against &{Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:02:42.528478  496079 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 15:02:42.529347  496079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:02:42.595215  496079 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 15:02:42.586032197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:02:42.595553  496079 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 15:02:42.595588  496079 cni.go:84] Creating CNI manager for ""
	I1121 15:02:42.595649  496079 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:02:42.595693  496079 start.go:353] cluster config:
	{Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:02:42.600974  496079 out.go:179] * Starting "newest-cni-714993" primary control-plane node in "newest-cni-714993" cluster
	I1121 15:02:42.603877  496079 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 15:02:42.606705  496079 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 15:02:42.609556  496079 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:02:42.609613  496079 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 15:02:42.609639  496079 cache.go:65] Caching tarball of preloaded images
	I1121 15:02:42.609636  496079 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 15:02:42.609723  496079 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 15:02:42.609733  496079 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 15:02:42.609850  496079 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/config.json ...
	I1121 15:02:42.629923  496079 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 15:02:42.629951  496079 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 15:02:42.629964  496079 cache.go:243] Successfully downloaded all kic artifacts
	I1121 15:02:42.629987  496079 start.go:360] acquireMachinesLock for newest-cni-714993: {Name:mk4fe5ba68b949796f6324fdcc6a0615ddd88762 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 15:02:42.630044  496079 start.go:364] duration metric: took 38.68µs to acquireMachinesLock for "newest-cni-714993"
	I1121 15:02:42.630075  496079 start.go:96] Skipping create...Using existing machine configuration
	I1121 15:02:42.630084  496079 fix.go:54] fixHost starting: 
	I1121 15:02:42.630389  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:42.648865  496079 fix.go:112] recreateIfNeeded on newest-cni-714993: state=Stopped err=<nil>
	W1121 15:02:42.648899  496079 fix.go:138] unexpected machine state, will restart: <nil>
	W1121 15:02:40.212198  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:42.213324  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:42.652107  496079 out.go:252] * Restarting existing docker container for "newest-cni-714993" ...
	I1121 15:02:42.652187  496079 cli_runner.go:164] Run: docker start newest-cni-714993
	I1121 15:02:42.915196  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:42.943713  496079 kic.go:430] container "newest-cni-714993" state is running.
	I1121 15:02:42.945898  496079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-714993
	I1121 15:02:42.967257  496079 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/config.json ...
	I1121 15:02:42.967488  496079 machine.go:94] provisionDockerMachine start ...
	I1121 15:02:42.967547  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:42.993317  496079 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:42.993641  496079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1121 15:02:42.993650  496079 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 15:02:42.995557  496079 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51664->127.0.0.1:33458: read: connection reset by peer
	I1121 15:02:46.148592  496079 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-714993
	
	I1121 15:02:46.148617  496079 ubuntu.go:182] provisioning hostname "newest-cni-714993"
	I1121 15:02:46.148681  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:46.171842  496079 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:46.172161  496079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1121 15:02:46.172173  496079 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-714993 && echo "newest-cni-714993" | sudo tee /etc/hostname
	I1121 15:02:46.332036  496079 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-714993
	
	I1121 15:02:46.332134  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:46.350244  496079 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:46.350544  496079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1121 15:02:46.350576  496079 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-714993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-714993/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-714993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 15:02:46.492742  496079 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 15:02:46.492768  496079 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 15:02:46.492794  496079 ubuntu.go:190] setting up certificates
	I1121 15:02:46.492805  496079 provision.go:84] configureAuth start
	I1121 15:02:46.492866  496079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-714993
	I1121 15:02:46.516424  496079 provision.go:143] copyHostCerts
	I1121 15:02:46.516505  496079 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 15:02:46.516529  496079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 15:02:46.516609  496079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 15:02:46.516709  496079 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 15:02:46.516726  496079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 15:02:46.516754  496079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 15:02:46.516812  496079 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 15:02:46.516820  496079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 15:02:46.516843  496079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 15:02:46.516894  496079 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.newest-cni-714993 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-714993]
	I1121 15:02:47.056112  496079 provision.go:177] copyRemoteCerts
	I1121 15:02:47.056188  496079 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 15:02:47.056234  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.074880  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:47.184737  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 15:02:47.204120  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 15:02:47.227078  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 15:02:47.254218  496079 provision.go:87] duration metric: took 761.398483ms to configureAuth
	I1121 15:02:47.254247  496079 ubuntu.go:206] setting minikube options for container-runtime
	I1121 15:02:47.254497  496079 config.go:182] Loaded profile config "newest-cni-714993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:02:47.254646  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.273220  496079 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:47.273526  496079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1121 15:02:47.273556  496079 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 15:02:47.603481  496079 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 15:02:47.603546  496079 machine.go:97] duration metric: took 4.636045882s to provisionDockerMachine
	I1121 15:02:47.603562  496079 start.go:293] postStartSetup for "newest-cni-714993" (driver="docker")
	I1121 15:02:47.603574  496079 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 15:02:47.603654  496079 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 15:02:47.603702  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.622002  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:47.729434  496079 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 15:02:47.733186  496079 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 15:02:47.733217  496079 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 15:02:47.733246  496079 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 15:02:47.733333  496079 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 15:02:47.733502  496079 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 15:02:47.733681  496079 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 15:02:47.742577  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:02:47.764276  496079 start.go:296] duration metric: took 160.697434ms for postStartSetup
	I1121 15:02:47.764419  496079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 15:02:47.764464  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.782075  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:47.881690  496079 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 15:02:47.886594  496079 fix.go:56] duration metric: took 5.256501377s for fixHost
	I1121 15:02:47.886621  496079 start.go:83] releasing machines lock for "newest-cni-714993", held for 5.256562786s
	I1121 15:02:47.886685  496079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-714993
	I1121 15:02:47.903454  496079 ssh_runner.go:195] Run: cat /version.json
	I1121 15:02:47.903510  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.903649  496079 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 15:02:47.903703  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.924815  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:47.938428  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:48.024801  496079 ssh_runner.go:195] Run: systemctl --version
	I1121 15:02:48.127467  496079 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 15:02:48.164336  496079 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 15:02:48.169049  496079 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 15:02:48.169135  496079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 15:02:48.178845  496079 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 15:02:48.178867  496079 start.go:496] detecting cgroup driver to use...
	I1121 15:02:48.178900  496079 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 15:02:48.178962  496079 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 15:02:48.194859  496079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 15:02:48.213524  496079 docker.go:218] disabling cri-docker service (if available) ...
	I1121 15:02:48.213610  496079 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 15:02:48.229437  496079 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 15:02:48.244241  496079 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 15:02:48.378061  496079 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 15:02:48.499991  496079 docker.go:234] disabling docker service ...
	I1121 15:02:48.500081  496079 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 15:02:48.517060  496079 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 15:02:48.535746  496079 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 15:02:48.657717  496079 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 15:02:48.787732  496079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 15:02:48.802604  496079 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 15:02:48.816743  496079 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 15:02:48.816859  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.826770  496079 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 15:02:48.826914  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.836458  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.846662  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.855700  496079 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 15:02:48.864154  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.873750  496079 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.882448  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
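	Taken together, the sed edits above pin the pause image, force the cgroupfs cgroup manager, and re-add the unprivileged-port sysctl in CRI-O's drop-in. A plausible reconstruction of the resulting /etc/crio/crio.conf.d/02-crio.conf (section headers assumed; only the keys touched in this run are shown):

	    [crio.image]
	    # pause image pinned by the first sed
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    # cgroup manager rewritten, conmon_cgroup re-inserted after it
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    # default_sysctls block created if absent, then the port sysctl prepended
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]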
	I1121 15:02:48.891141  496079 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 15:02:48.899377  496079 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 15:02:48.907383  496079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:02:49.027459  496079 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 15:02:49.203884  496079 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 15:02:49.204007  496079 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 15:02:49.208624  496079 start.go:564] Will wait 60s for crictl version
	I1121 15:02:49.208739  496079 ssh_runner.go:195] Run: which crictl
	I1121 15:02:49.213274  496079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 15:02:49.243703  496079 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
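	"Will wait 60s for socket path" above is a plain stat poll: systemctl restart returns before CRI-O has created its socket, so the wait retries until /var/run/crio/crio.sock exists or the deadline passes. A minimal Go sketch of that wait, assuming the shape rather than minikube's actual start.go:

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    // waitForSocket polls until path exists, mirroring the 60s wait for
	    // /var/run/crio/crio.sock after "systemctl restart crio" returns.
	    func waitForSocket(path string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if _, err := os.Stat(path); err == nil {
	                return nil // socket is present; crictl can be probed next
	            }
	            time.Sleep(200 * time.Millisecond)
	        }
	        return fmt.Errorf("%s did not appear within %s", path, timeout)
	    }

	    func main() {
	        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	    }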
	I1121 15:02:49.243793  496079 ssh_runner.go:195] Run: crio --version
	I1121 15:02:49.277206  496079 ssh_runner.go:195] Run: crio --version
	I1121 15:02:49.328716  496079 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 15:02:49.331606  496079 cli_runner.go:164] Run: docker network inspect newest-cni-714993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 15:02:49.350008  496079 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 15:02:49.353884  496079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:02:49.366483  496079 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1121 15:02:44.710629  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:46.711463  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:49.211292  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:49.369308  496079 kubeadm.go:884] updating cluster {Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 15:02:49.369460  496079 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:02:49.369533  496079 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:02:49.405700  496079 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:02:49.405726  496079 crio.go:433] Images already preloaded, skipping extraction
	I1121 15:02:49.405791  496079 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:02:49.431974  496079 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:02:49.432001  496079 cache_images.go:86] Images are preloaded, skipping loading
	I1121 15:02:49.432009  496079 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1121 15:02:49.432114  496079 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-714993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 15:02:49.432203  496079 ssh_runner.go:195] Run: crio config
	I1121 15:02:49.490938  496079 cni.go:84] Creating CNI manager for ""
	I1121 15:02:49.490963  496079 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:02:49.490980  496079 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1121 15:02:49.491003  496079 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-714993 NodeName:newest-cni-714993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 15:02:49.491148  496079 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-714993"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 15:02:49.491221  496079 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 15:02:49.498994  496079 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 15:02:49.499062  496079 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 15:02:49.506800  496079 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 15:02:49.520306  496079 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 15:02:49.533712  496079 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1121 15:02:49.546858  496079 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 15:02:49.551381  496079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:02:49.561367  496079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:02:49.685924  496079 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:02:49.701694  496079 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993 for IP: 192.168.76.2
	I1121 15:02:49.701768  496079 certs.go:195] generating shared ca certs ...
	I1121 15:02:49.701800  496079 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:49.701999  496079 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 15:02:49.702064  496079 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 15:02:49.702107  496079 certs.go:257] generating profile certs ...
	I1121 15:02:49.702266  496079 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/client.key
	I1121 15:02:49.702377  496079 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.key.90646b61
	I1121 15:02:49.702456  496079 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.key
	I1121 15:02:49.702627  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 15:02:49.702690  496079 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 15:02:49.702716  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 15:02:49.702775  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 15:02:49.702835  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 15:02:49.702882  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 15:02:49.702958  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:02:49.703800  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 15:02:49.727104  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 15:02:49.747558  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 15:02:49.768916  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 15:02:49.793932  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 15:02:49.817553  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 15:02:49.840169  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 15:02:49.863524  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 15:02:49.886827  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 15:02:49.912104  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 15:02:49.938650  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 15:02:49.961725  496079 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 15:02:49.975838  496079 ssh_runner.go:195] Run: openssl version
	I1121 15:02:49.983766  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 15:02:49.994392  496079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:02:49.998273  496079 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:02:49.998344  496079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:02:50.044482  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 15:02:50.053859  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 15:02:50.062982  496079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 15:02:50.067176  496079 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 15:02:50.067315  496079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 15:02:50.109852  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 15:02:50.118270  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 15:02:50.127502  496079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 15:02:50.131588  496079 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 15:02:50.131663  496079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 15:02:50.174377  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
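	The openssl/ln pairs above follow OpenSSL's hashed-directory convention: "openssl x509 -hash -noout" prints the certificate's subject-name hash (b5213941, 51391683, 3ec20f2e in this run), and verifiers look certificates up in /etc/ssl/certs under the name <hash>.0. A minimal Go sketch of that install pattern (hypothetical helper, not minikube's actual certs.go):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    // installCA links certPath into /etc/ssl/certs under OpenSSL's hashed
	    // name (<subject-hash>.0) so TLS verifiers can locate it.
	    func installCA(certPath string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	        if err != nil {
	            return fmt.Errorf("hashing %s: %w", certPath, err)
	        }
	        hash := strings.TrimSpace(string(out))
	        link := filepath.Join("/etc/ssl/certs", hash+".0")
	        _ = os.Remove(link) // replace any stale link, mirroring "ln -fs"
	        return os.Symlink(certPath, link)
	    }

	    func main() {
	        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	    }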
	I1121 15:02:50.183598  496079 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 15:02:50.187934  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 15:02:50.229550  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 15:02:50.277404  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 15:02:50.323056  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 15:02:50.403921  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 15:02:50.503153  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1121 15:02:50.623174  496079 kubeadm.go:401] StartCluster: {Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:02:50.623380  496079 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 15:02:50.623508  496079 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 15:02:50.678529  496079 cri.go:89] found id: "2edf2c715c49a8ea3535cad8175de2b076e3defaf18e6a36a9e7d31008d89625"
	I1121 15:02:50.678629  496079 cri.go:89] found id: "730f5e074ea29c84ee762fa289c70402bf32780d481aaa7682e731dd9794d540"
	I1121 15:02:50.678649  496079 cri.go:89] found id: "a66dde0c760949de4963a031792226b729cb39589bf9b5e48c1f90fc16d85523"
	I1121 15:02:50.678697  496079 cri.go:89] found id: "6e6566081c03aa51453671dda29548d283a3156fd32976da87a2e0708b5ca23e"
	I1121 15:02:50.678721  496079 cri.go:89] found id: ""
	I1121 15:02:50.678831  496079 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 15:02:50.703062  496079 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:02:50Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:02:50.703240  496079 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 15:02:50.715059  496079 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 15:02:50.715079  496079 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 15:02:50.715134  496079 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 15:02:50.733115  496079 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 15:02:50.733897  496079 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-714993" does not appear in /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:02:50.734306  496079 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-289204/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-714993" cluster setting kubeconfig missing "newest-cni-714993" context setting]
	I1121 15:02:50.734918  496079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:50.736740  496079 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 15:02:50.745366  496079 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1121 15:02:50.745448  496079 kubeadm.go:602] duration metric: took 30.362158ms to restartPrimaryControlPlane
	I1121 15:02:50.745472  496079 kubeadm.go:403] duration metric: took 122.307891ms to StartCluster
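	The restart path above decides whether kubeadm must rerun by diffing the staged config against the live one: "sudo diff -u" exiting 0 means the files are byte-identical, so the running control plane is kept as-is. A sketch of that check under those assumptions (hypothetical helper, not minikube's code):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "os/exec"
	    )

	    // needsReconfig reports whether the staged kubeadm config differs from
	    // the one the control plane was started with. diff exits 0 when the
	    // files match and 1 when they differ.
	    func needsReconfig(current, staged string) (bool, error) {
	        err := exec.Command("sudo", "diff", "-u", current, staged).Run()
	        if err == nil {
	            return false, nil // identical: keep the running control plane
	        }
	        var exitErr *exec.ExitError
	        if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
	            return true, nil // files differ: reconfigure
	        }
	        return false, err // diff itself failed (e.g. missing file)
	    }

	    func main() {
	        differs, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	        fmt.Println(differs, err)
	    }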
	I1121 15:02:50.745515  496079 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:50.745606  496079 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:02:50.746612  496079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:50.746912  496079 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:02:50.747472  496079 config.go:182] Loaded profile config "newest-cni-714993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:02:50.747423  496079 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 15:02:50.747679  496079 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-714993"
	I1121 15:02:50.747721  496079 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-714993"
	W1121 15:02:50.747743  496079 addons.go:248] addon storage-provisioner should already be in state true
	I1121 15:02:50.747829  496079 host.go:66] Checking if "newest-cni-714993" exists ...
	I1121 15:02:50.748451  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:50.748657  496079 addons.go:70] Setting dashboard=true in profile "newest-cni-714993"
	I1121 15:02:50.748701  496079 addons.go:239] Setting addon dashboard=true in "newest-cni-714993"
	W1121 15:02:50.748722  496079 addons.go:248] addon dashboard should already be in state true
	I1121 15:02:50.748805  496079 host.go:66] Checking if "newest-cni-714993" exists ...
	I1121 15:02:50.748961  496079 addons.go:70] Setting default-storageclass=true in profile "newest-cni-714993"
	I1121 15:02:50.748993  496079 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-714993"
	I1121 15:02:50.749278  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:50.749377  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:50.755451  496079 out.go:179] * Verifying Kubernetes components...
	I1121 15:02:50.758751  496079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:02:50.799096  496079 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 15:02:50.802525  496079 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1121 15:02:50.808522  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 15:02:50.808552  496079 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 15:02:50.808694  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:50.812000  496079 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 15:02:50.818481  496079 addons.go:239] Setting addon default-storageclass=true in "newest-cni-714993"
	W1121 15:02:50.818512  496079 addons.go:248] addon default-storageclass should already be in state true
	I1121 15:02:50.818554  496079 host.go:66] Checking if "newest-cni-714993" exists ...
	I1121 15:02:50.819084  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:50.822097  496079 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:02:50.822141  496079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 15:02:50.822209  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:50.874684  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:50.875518  496079 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 15:02:50.875535  496079 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 15:02:50.875786  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:50.882510  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:50.909181  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:51.176297  496079 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:02:51.185545  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 15:02:51.185637  496079 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 15:02:51.205710  496079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 15:02:51.231325  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 15:02:51.231411  496079 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 15:02:51.238398  496079 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:02:51.238474  496079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:02:51.268184  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 15:02:51.268210  496079 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 15:02:51.295505  496079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:02:51.354217  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 15:02:51.354241  496079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 15:02:51.404607  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 15:02:51.404634  496079 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 15:02:51.483045  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 15:02:51.483065  496079 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 15:02:51.569523  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 15:02:51.569549  496079 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 15:02:51.615112  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 15:02:51.615137  496079 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 15:02:51.649666  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 15:02:51.649690  496079 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 15:02:51.668234  496079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
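	Each dashboard manifest is staged into /etc/kubernetes/addons over SSH and then applied in a single kubectl invocation against the node-local kubeconfig, as in the Run line above. A rough Go sketch of that apply step (hypothetical helper; the real code drives kubectl through ssh_runner):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    // applyAddons applies every staged manifest in one kubectl call against
	    // the node-local kubeconfig, so all objects land in a single apply.
	    func applyAddons(kubectl string, manifests []string) error {
	        args := []string{"apply"}
	        for _, m := range manifests {
	            args = append(args, "-f", m)
	        }
	        cmd := exec.Command(kubectl, args...)
	        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	        return cmd.Run()
	    }

	    func main() {
	        manifests := []string{
	            "/etc/kubernetes/addons/dashboard-ns.yaml",
	            "/etc/kubernetes/addons/dashboard-svc.yaml",
	        }
	        if err := applyAddons("/var/lib/minikube/binaries/v1.34.1/kubectl", manifests); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    }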
	W1121 15:02:51.716272  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:54.212012  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:56.113453  496079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.907650842s)
	I1121 15:02:56.113809  496079 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.875316755s)
	I1121 15:02:56.113837  496079 api_server.go:72] duration metric: took 5.366878137s to wait for apiserver process to appear ...
	I1121 15:02:56.113847  496079 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:02:56.113860  496079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:56.300639  496079 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 15:02:56.300674  496079 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 15:02:56.614268  496079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:56.625340  496079 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 15:02:56.625375  496079 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 15:02:57.114692  496079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:57.123634  496079 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 15:02:57.123664  496079 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 15:02:57.614484  496079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:57.664717  496079 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 15:02:57.664824  496079 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 15:02:57.844307  496079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.548765867s)
	I1121 15:02:57.844446  496079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.17618026s)
	I1121 15:02:57.847552  496079 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-714993 addons enable metrics-server
	
	I1121 15:02:57.850468  496079 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1121 15:02:57.853345  496079 addons.go:530] duration metric: took 7.105909279s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1121 15:02:58.113985  496079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:58.126506  496079 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 15:02:58.128017  496079 api_server.go:141] control plane version: v1.34.1
	I1121 15:02:58.128040  496079 api_server.go:131] duration metric: took 2.014186716s to wait for apiserver health ...
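	The 500 responses above are expected during a restart: /healthz aggregates the apiserver's poststarthook probes, and the wait loop simply re-polls until every hook flips to ok (here rbac/bootstrap-roles is the last to clear). A minimal sketch of such a poll loop, assuming the shape rather than minikube's actual api_server.go:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // waitHealthz polls the apiserver /healthz endpoint until it returns
	    // 200 or the deadline passes. TLS verification is skipped here only
	    // because this sketch has no access to the cluster CA bundle.
	    func waitHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout:   2 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if resp, err := client.Get(url); err == nil {
	                io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // every poststarthook reported ok
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver not healthy after %s", timeout)
	    }

	    func main() {
	        if err := waitHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }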
	I1121 15:02:58.128049  496079 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:02:58.135199  496079 system_pods.go:59] 8 kube-system pods found
	I1121 15:02:58.135236  496079 system_pods.go:61] "coredns-66bc5c9577-gg7hh" [9870c48c-8548-4838-8cb4-9174010fdcd0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 15:02:58.135247  496079 system_pods.go:61] "etcd-newest-cni-714993" [c62fc121-98a5-4101-a4f7-b563520e09a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:02:58.135255  496079 system_pods.go:61] "kindnet-jssq6" [da7ab922-ecf7-449c-9aac-481926be6add] Running
	I1121 15:02:58.135262  496079 system_pods.go:61] "kube-apiserver-newest-cni-714993" [a9d0ff40-44af-4f7c-beed-5c0b8061b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:02:58.135268  496079 system_pods.go:61] "kube-controller-manager-newest-cni-714993" [f3df974c-7a27-41a2-aaae-664da491b689] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:02:58.135273  496079 system_pods.go:61] "kube-proxy-jmrq8" [153afdbe-a8ec-43f0-a76c-4b9c81867c6e] Running
	I1121 15:02:58.135279  496079 system_pods.go:61] "kube-scheduler-newest-cni-714993" [257585ec-0c18-4a96-a45d-d924cf069dff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:02:58.135285  496079 system_pods.go:61] "storage-provisioner" [36238ceb-8620-4b93-9a0a-f802b27e8c16] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 15:02:58.135291  496079 system_pods.go:74] duration metric: took 7.236121ms to wait for pod list to return data ...
	I1121 15:02:58.135300  496079 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:02:58.140155  496079 default_sa.go:45] found service account: "default"
	I1121 15:02:58.140178  496079 default_sa.go:55] duration metric: took 4.872959ms for default service account to be created ...
	I1121 15:02:58.140191  496079 kubeadm.go:587] duration metric: took 7.393230073s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 15:02:58.140209  496079 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:02:58.143876  496079 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:02:58.143961  496079 node_conditions.go:123] node cpu capacity is 2
	I1121 15:02:58.143989  496079 node_conditions.go:105] duration metric: took 3.774329ms to run NodePressure ...
	I1121 15:02:58.144032  496079 start.go:242] waiting for startup goroutines ...
	I1121 15:02:58.144055  496079 start.go:247] waiting for cluster config update ...
	I1121 15:02:58.144079  496079 start.go:256] writing updated cluster config ...
	I1121 15:02:58.144429  496079 ssh_runner.go:195] Run: rm -f paused
	I1121 15:02:58.226616  496079 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:02:58.230233  496079 out.go:179] * Done! kubectl is now configured to use "newest-cni-714993" cluster and "default" namespace by default
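The healthz wait logged above (api_server.go) polls https://192.168.76.2:8443/healthz until it answers 200. A minimal Go sketch of that polling pattern follows; it is an illustration, not minikube's actual implementation. InsecureSkipVerify stands in for the cluster-CA handling minikube does internally, and the 500ms interval is an assumed value:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz polls an apiserver /healthz endpoint until it returns 200 OK
	// or the timeout elapses, roughly the check api_server.go logs above.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// assumption: skip cert verification for the self-signed test
			// cluster; minikube validates against the cluster CA instead.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %v", url, timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}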
	W1121 15:02:56.709769  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:57.209883  489211 node_ready.go:49] node "default-k8s-diff-port-124330" is "Ready"
	I1121 15:02:57.209907  489211 node_ready.go:38] duration metric: took 39.502764026s for node "default-k8s-diff-port-124330" to be "Ready" ...
	I1121 15:02:57.209922  489211 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:02:57.209981  489211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:02:57.226304  489211 api_server.go:72] duration metric: took 41.578586729s to wait for apiserver process to appear ...
	I1121 15:02:57.226325  489211 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:02:57.226343  489211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1121 15:02:57.236434  489211 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1121 15:02:57.239252  489211 api_server.go:141] control plane version: v1.34.1
	I1121 15:02:57.239280  489211 api_server.go:131] duration metric: took 12.947613ms to wait for apiserver health ...
	I1121 15:02:57.239289  489211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:02:57.244962  489211 system_pods.go:59] 8 kube-system pods found
	I1121 15:02:57.244996  489211 system_pods.go:61] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:02:57.245003  489211 system_pods.go:61] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running
	I1121 15:02:57.245009  489211 system_pods.go:61] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:02:57.245015  489211 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:02:57.245019  489211 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:02:57.245024  489211 system_pods.go:61] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:02:57.245028  489211 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running
	I1121 15:02:57.245033  489211 system_pods.go:61] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:02:57.245040  489211 system_pods.go:74] duration metric: took 5.74442ms to wait for pod list to return data ...
	I1121 15:02:57.245050  489211 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:02:57.254861  489211 default_sa.go:45] found service account: "default"
	I1121 15:02:57.254939  489211 default_sa.go:55] duration metric: took 9.882674ms for default service account to be created ...
	I1121 15:02:57.254975  489211 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:02:57.281803  489211 system_pods.go:86] 8 kube-system pods found
	I1121 15:02:57.281832  489211 system_pods.go:89] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:02:57.281840  489211 system_pods.go:89] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running
	I1121 15:02:57.281846  489211 system_pods.go:89] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:02:57.281850  489211 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:02:57.281855  489211 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:02:57.281860  489211 system_pods.go:89] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:02:57.281864  489211 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running
	I1121 15:02:57.281869  489211 system_pods.go:89] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:02:57.281897  489211 retry.go:31] will retry after 212.204634ms: missing components: kube-dns
	I1121 15:02:57.507373  489211 system_pods.go:86] 8 kube-system pods found
	I1121 15:02:57.507402  489211 system_pods.go:89] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:02:57.507408  489211 system_pods.go:89] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running
	I1121 15:02:57.507414  489211 system_pods.go:89] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:02:57.507418  489211 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:02:57.507422  489211 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:02:57.507426  489211 system_pods.go:89] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:02:57.507430  489211 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running
	I1121 15:02:57.507436  489211 system_pods.go:89] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:02:57.507450  489211 retry.go:31] will retry after 360.006251ms: missing components: kube-dns
	I1121 15:02:57.873550  489211 system_pods.go:86] 8 kube-system pods found
	I1121 15:02:57.873663  489211 system_pods.go:89] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Running
	I1121 15:02:57.873737  489211 system_pods.go:89] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running
	I1121 15:02:57.873771  489211 system_pods.go:89] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:02:57.873790  489211 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:02:57.873837  489211 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:02:57.873862  489211 system_pods.go:89] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:02:57.873893  489211 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running
	I1121 15:02:57.873933  489211 system_pods.go:89] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Running
	I1121 15:02:57.873972  489211 system_pods.go:126] duration metric: took 618.975133ms to wait for k8s-apps to be running ...
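The `will retry after ...` lines come from retry.go, which re-checks the pod list with a growing randomized delay (212ms, then 360ms above) until no components are missing. A rough sketch of that loop; the name retryUntil and the delay schedule are illustrative, not minikube's code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryUntil re-runs check until it succeeds or timeout elapses, sleeping
	// a caller-chosen, attempt-dependent delay between tries.
	func retryUntil(timeout time.Duration, delay func(attempt int) time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for attempt := 0; ; attempt++ {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			time.Sleep(delay(attempt))
		}
	}

	func main() {
		attempts := 0
		err := retryUntil(5*time.Second,
			// growing delay, akin to the 212ms/360ms retries above
			func(a int) time.Duration { return time.Duration(200+150*a) * time.Millisecond },
			func() error {
				attempts++
				if attempts < 3 {
					return errors.New("missing components: kube-dns")
				}
				return nil
			})
		fmt.Println(err) // <nil> once the check finally passes
	}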
	I1121 15:02:57.874013  489211 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:02:57.874115  489211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:02:57.903361  489211 system_svc.go:56] duration metric: took 29.321194ms WaitForService to wait for kubelet
	I1121 15:02:57.903453  489211 kubeadm.go:587] duration metric: took 42.255740745s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:02:57.903496  489211 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:02:57.908109  489211 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:02:57.908210  489211 node_conditions.go:123] node cpu capacity is 2
	I1121 15:02:57.908247  489211 node_conditions.go:105] duration metric: took 4.719653ms to run NodePressure ...
	I1121 15:02:57.908299  489211 start.go:242] waiting for startup goroutines ...
	I1121 15:02:57.908345  489211 start.go:247] waiting for cluster config update ...
	I1121 15:02:57.908412  489211 start.go:256] writing updated cluster config ...
	I1121 15:02:57.908831  489211 ssh_runner.go:195] Run: rm -f paused
	I1121 15:02:57.913796  489211 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:02:57.919831  489211 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zhrs7" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.928080  489211 pod_ready.go:94] pod "coredns-66bc5c9577-zhrs7" is "Ready"
	I1121 15:02:57.928169  489211 pod_ready.go:86] duration metric: took 8.238495ms for pod "coredns-66bc5c9577-zhrs7" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.931824  489211 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.942142  489211 pod_ready.go:94] pod "etcd-default-k8s-diff-port-124330" is "Ready"
	I1121 15:02:57.942245  489211 pod_ready.go:86] duration metric: took 10.274064ms for pod "etcd-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.946600  489211 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.953194  489211 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-124330" is "Ready"
	I1121 15:02:57.953262  489211 pod_ready.go:86] duration metric: took 6.56527ms for pod "kube-apiserver-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.955896  489211 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:58.323172  489211 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-124330" is "Ready"
	I1121 15:02:58.323196  489211 pod_ready.go:86] duration metric: took 367.217656ms for pod "kube-controller-manager-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:58.518765  489211 pod_ready.go:83] waiting for pod "kube-proxy-fr5df" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:58.918804  489211 pod_ready.go:94] pod "kube-proxy-fr5df" is "Ready"
	I1121 15:02:58.918830  489211 pod_ready.go:86] duration metric: took 400.039888ms for pod "kube-proxy-fr5df" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:59.122830  489211 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:59.518733  489211 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-124330" is "Ready"
	I1121 15:02:59.518762  489211 pod_ready.go:86] duration metric: took 395.908007ms for pod "kube-scheduler-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:59.518775  489211 pod_ready.go:40] duration metric: took 1.604878893s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:02:59.595903  489211 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:02:59.599202  489211 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-124330" cluster and "default" namespace by default
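The pod_ready.go checks above reduce to the standard PodReady condition test: a pod counts as "Ready" when its status carries a PodReady condition with value True. A minimal sketch using the upstream API types (assumes the k8s.io/api module is available; not minikube's source):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod carries a PodReady condition with
	// status True -- the predicate behind the `pod "..." is "Ready"` lines.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		p := &corev1.Pod{Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
		}}
		fmt.Println(isPodReady(p)) // true
	}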
	
	
	==> CRI-O <==
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.256835425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.270532208Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=700163c8-85fb-47e4-9d8e-8a938179fe2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.27242581Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-jmrq8/POD" id=940703bd-899e-42c5-a505-b787168765b9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.272611822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.291525978Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=940703bd-899e-42c5-a505-b787168765b9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.29712624Z" level=info msg="Ran pod sandbox d4e19cd090f91d52c4f8dd87d3e89baa605a4fccefd2ee8fd72d43dad30cd079 with infra container: kube-system/kindnet-jssq6/POD" id=700163c8-85fb-47e4-9d8e-8a938179fe2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.305300891Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=729dfcee-c81a-4d38-b508-95b5d7e36db1 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.307980323Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=72f3c550-f686-4e5c-9ec2-7f877ffac505 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.312814217Z" level=info msg="Creating container: kube-system/kindnet-jssq6/kindnet-cni" id=9f5dc288-8580-4650-8fbe-7671eaa8efb7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.313067595Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.347134608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.347895363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.358397524Z" level=info msg="Ran pod sandbox 3e573abf4b8a0bf606d4acec6c128d59d7476edba6117c61b78b242856f50102 with infra container: kube-system/kube-proxy-jmrq8/POD" id=940703bd-899e-42c5-a505-b787168765b9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.371676185Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0457464a-c6f7-44f9-993e-fabba2dd2f36 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.375190532Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fddf1e67-37c3-46aa-aa66-5ba4d560341d name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.379117929Z" level=info msg="Creating container: kube-system/kube-proxy-jmrq8/kube-proxy" id=fa3de2cf-07b4-43e7-b74c-d9f8d43d69e9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.379429974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.433293508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.434012163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.44931054Z" level=info msg="Created container 91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72: kube-system/kindnet-jssq6/kindnet-cni" id=9f5dc288-8580-4650-8fbe-7671eaa8efb7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.45017839Z" level=info msg="Starting container: 91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72" id=4232609c-da8e-4693-aec1-375574b1cc05 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.467109901Z" level=info msg="Started container" PID=1065 containerID=91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72 description=kube-system/kindnet-jssq6/kindnet-cni id=4232609c-da8e-4693-aec1-375574b1cc05 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d4e19cd090f91d52c4f8dd87d3e89baa605a4fccefd2ee8fd72d43dad30cd079
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.489884476Z" level=info msg="Created container 4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e: kube-system/kube-proxy-jmrq8/kube-proxy" id=fa3de2cf-07b4-43e7-b74c-d9f8d43d69e9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.491571273Z" level=info msg="Starting container: 4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e" id=7f6f01f2-35bf-4316-a589-c91b4ce51c9d name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.494808767Z" level=info msg="Started container" PID=1072 containerID=4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e description=kube-system/kube-proxy-jmrq8/kube-proxy id=7f6f01f2-35bf-4316-a589-c91b4ce51c9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e573abf4b8a0bf606d4acec6c128d59d7476edba6117c61b78b242856f50102
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4358b82c8954f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   3e573abf4b8a0       kube-proxy-jmrq8                            kube-system
	91c8c5d635117       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   d4e19cd090f91       kindnet-jssq6                               kube-system
	2edf2c715c49a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   62df6935e425c       kube-controller-manager-newest-cni-714993   kube-system
	730f5e074ea29       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   2a55f7f2ae218       kube-apiserver-newest-cni-714993            kube-system
	a66dde0c76094       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   a8c32deb16c5f       kube-scheduler-newest-cni-714993            kube-system
	6e6566081c03a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   4ef705610f57f       etcd-newest-cni-714993                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-714993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-714993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=newest-cni-714993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T15_02_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 15:02:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-714993
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:02:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-714993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                bc9159db-8195-45b8-b93a-134eb7c35db1
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-714993                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-jssq6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-714993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-714993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-jmrq8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-714993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-714993 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-714993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-714993 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-714993 event: Registered Node newest-cni-714993 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-714993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-714993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-714993 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-714993 event: Registered Node newest-cni-714993 in Controller
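The Ready=False condition and the node.kubernetes.io/not-ready:NoSchedule taint above are why coredns and storage-provisioner were reported Unschedulable earlier in the log: until the CNI reports ready, the node keeps the taint and the scheduler skips pods that do not tolerate it. The readiness predicate, sketched with the upstream types (illustrative, not kubelet source):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isNodeReady reports whether the node's NodeReady condition is True.
	// While it is False, the not-ready taint stays on the node and
	// untolerating pods remain Pending.
	func isNodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		n := &corev1.Node{Status: corev1.NodeStatus{
			Conditions: []corev1.NodeCondition{{Type: corev1.NodeReady, Status: corev1.ConditionFalse}},
		}}
		fmt.Println(isNodeReady(n)) // false
	}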
	
	
	==> dmesg <==
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	[Nov21 15:00] overlayfs: idmapped layers are currently not supported
	[ +13.392744] overlayfs: idmapped layers are currently not supported
	[Nov21 15:01] overlayfs: idmapped layers are currently not supported
	[Nov21 15:02] overlayfs: idmapped layers are currently not supported
	[ +25.555443] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6e6566081c03aa51453671dda29548d283a3156fd32976da87a2e0708b5ca23e] <==
	{"level":"warn","ts":"2025-11-21T15:02:54.171335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.197636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.277143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.304501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.341286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.367639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.385479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.407326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.429410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.452172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.481078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.501005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.521792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.542687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.582083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.589472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.613310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.637898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.658136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.681125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.713953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.731825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.749625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.773948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.870922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44660","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:03:02 up  2:45,  0 user,  load average: 5.26, 3.89, 2.98
	Linux newest-cni-714993 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72] <==
	I1121 15:02:56.508249       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 15:02:56.510450       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 15:02:56.511532       1 main.go:148] setting mtu 1500 for CNI 
	I1121 15:02:56.511603       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 15:02:56.511644       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T15:02:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 15:02:56.800753       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 15:02:56.800783       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 15:02:56.800797       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 15:02:56.800921       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [730f5e074ea29c84ee762fa289c70402bf32780d481aaa7682e731dd9794d540] <==
	I1121 15:02:56.102461       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 15:02:56.102569       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1121 15:02:56.102770       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 15:02:56.126420       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 15:02:56.128250       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 15:02:56.185010       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 15:02:56.241241       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1121 15:02:56.246561       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 15:02:56.246630       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 15:02:56.276513       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 15:02:56.276622       1 aggregator.go:171] initial CRD sync complete...
	I1121 15:02:56.276633       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 15:02:56.276640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 15:02:56.276646       1 cache.go:39] Caches are synced for autoregister controller
	E1121 15:02:56.364733       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 15:02:56.683277       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 15:02:57.112046       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 15:02:57.348790       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 15:02:57.461309       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 15:02:57.530901       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 15:02:57.768992       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.98.8"}
	I1121 15:02:57.805390       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.229.103"}
	I1121 15:03:00.681132       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 15:03:00.725662       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 15:03:00.744029       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2edf2c715c49a8ea3535cad8175de2b076e3defaf18e6a36a9e7d31008d89625] <==
	I1121 15:03:00.649173       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 15:03:00.649185       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 15:03:00.650050       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 15:03:00.651585       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 15:03:00.651763       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 15:03:00.650129       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 15:03:00.656252       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 15:03:00.658665       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:03:00.668262       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 15:03:00.673238       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 15:03:00.674089       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 15:03:00.676451       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 15:03:00.696450       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 15:03:00.697067       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 15:03:00.697156       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 15:03:00.697300       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 15:03:00.698727       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 15:03:00.699646       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 15:03:00.700102       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-714993"
	I1121 15:03:00.700206       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 15:03:00.699543       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:03:00.699564       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 15:03:00.699576       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 15:03:00.702391       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 15:03:00.703898       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e] <==
	I1121 15:02:57.865055       1 server_linux.go:53] "Using iptables proxy"
	I1121 15:02:58.078126       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 15:02:58.190555       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 15:02:58.190679       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 15:02:58.190788       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 15:02:58.300812       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 15:02:58.300961       1 server_linux.go:132] "Using iptables Proxier"
	I1121 15:02:58.324011       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 15:02:58.324525       1 server.go:527] "Version info" version="v1.34.1"
	I1121 15:02:58.324822       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:02:58.326120       1 config.go:200] "Starting service config controller"
	I1121 15:02:58.326189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 15:02:58.326230       1 config.go:106] "Starting endpoint slice config controller"
	I1121 15:02:58.326256       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 15:02:58.326291       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 15:02:58.326317       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 15:02:58.334006       1 config.go:309] "Starting node config controller"
	I1121 15:02:58.334096       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 15:02:58.334130       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 15:02:58.426354       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 15:02:58.426413       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 15:02:58.426461       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a66dde0c760949de4963a031792226b729cb39589bf9b5e48c1f90fc16d85523] <==
	I1121 15:02:54.668205       1 serving.go:386] Generated self-signed cert in-memory
	I1121 15:02:58.173922       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 15:02:58.174057       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:02:58.179320       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 15:02:58.179707       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 15:02:58.179776       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 15:02:58.179856       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 15:02:58.181847       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:02:58.191336       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:02:58.190722       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:02:58.191837       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:02:58.283726       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 15:02:58.294263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:02:58.294451       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 15:02:52 newest-cni-714993 kubelet[733]: E1121 15:02:52.969950     733 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-714993\" not found" node="newest-cni-714993"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.821725     733 apiserver.go:52] "Watching apiserver"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.834147     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-714993"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.951074     733 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.986976     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/153afdbe-a8ec-43f0-a76c-4b9c81867c6e-xtables-lock\") pod \"kube-proxy-jmrq8\" (UID: \"153afdbe-a8ec-43f0-a76c-4b9c81867c6e\") " pod="kube-system/kube-proxy-jmrq8"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.987017     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/153afdbe-a8ec-43f0-a76c-4b9c81867c6e-lib-modules\") pod \"kube-proxy-jmrq8\" (UID: \"153afdbe-a8ec-43f0-a76c-4b9c81867c6e\") " pod="kube-system/kube-proxy-jmrq8"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.987069     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/da7ab922-ecf7-449c-9aac-481926be6add-cni-cfg\") pod \"kindnet-jssq6\" (UID: \"da7ab922-ecf7-449c-9aac-481926be6add\") " pod="kube-system/kindnet-jssq6"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.987087     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da7ab922-ecf7-449c-9aac-481926be6add-lib-modules\") pod \"kindnet-jssq6\" (UID: \"da7ab922-ecf7-449c-9aac-481926be6add\") " pod="kube-system/kindnet-jssq6"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.987126     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da7ab922-ecf7-449c-9aac-481926be6add-xtables-lock\") pod \"kindnet-jssq6\" (UID: \"da7ab922-ecf7-449c-9aac-481926be6add\") " pod="kube-system/kindnet-jssq6"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.076856     733 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: E1121 15:02:56.255901     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-714993\" already exists" pod="kube-system/etcd-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.255930     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.323074     733 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: E1121 15:02:56.327067     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-714993\" already exists" pod="kube-system/kube-apiserver-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.327120     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.327225     733 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.327385     733 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.336036     733 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: W1121 15:02:56.353662     733 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/crio-3e573abf4b8a0bf606d4acec6c128d59d7476edba6117c61b78b242856f50102 WatchSource:0}: Error finding container 3e573abf4b8a0bf606d4acec6c128d59d7476edba6117c61b78b242856f50102: Status 404 returned error can't find the container with id 3e573abf4b8a0bf606d4acec6c128d59d7476edba6117c61b78b242856f50102
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: E1121 15:02:56.371044     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-714993\" already exists" pod="kube-system/kube-controller-manager-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.371081     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: E1121 15:02:56.405777     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-714993\" already exists" pod="kube-system/kube-scheduler-newest-cni-714993"
	Nov 21 15:02:59 newest-cni-714993 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 15:02:59 newest-cni-714993 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 15:02:59 newest-cni-714993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-714993 -n newest-cni-714993
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-714993 -n newest-cni-714993: exit status 2 (420.289991ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-714993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gg7hh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c6w9g kubernetes-dashboard-855c9754f9-p7ftl
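The field selector above filters pods server-side on status.phase. An equivalent query through client-go, sketched under the assumption of a default kubeconfig with the desired context selected; listNonRunning is a hypothetical helper, not part of the test suite:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// listNonRunning mirrors `kubectl get po -A --field-selector=status.phase!=Running`.
	func listNonRunning(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			return nil, err
		}
		return pods.Items, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		pods, err := listNonRunning(context.Background(), kubernetes.NewForConfigOrDie(cfg))
		if err != nil {
			panic(err)
		}
		for _, p := range pods {
			fmt.Println(p.Namespace + "/" + p.Name)
		}
	}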
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-714993 describe pod coredns-66bc5c9577-gg7hh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c6w9g kubernetes-dashboard-855c9754f9-p7ftl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-714993 describe pod coredns-66bc5c9577-gg7hh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c6w9g kubernetes-dashboard-855c9754f9-p7ftl: exit status 1 (91.879869ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gg7hh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-c6w9g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-p7ftl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-714993 describe pod coredns-66bc5c9577-gg7hh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c6w9g kubernetes-dashboard-855c9754f9-p7ftl: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-714993
helpers_test.go:243: (dbg) docker inspect newest-cni-714993:

-- stdout --
	[
	    {
	        "Id": "bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a",
	        "Created": "2025-11-21T15:02:00.230610086Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T15:02:42.684711688Z",
	            "FinishedAt": "2025-11-21T15:02:41.627676029Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/hostname",
	        "HostsPath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/hosts",
	        "LogPath": "/var/lib/docker/containers/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a-json.log",
	        "Name": "/newest-cni-714993",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-714993:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-714993",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a",
	                "LowerDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f94cb91fca9e0d93d6363f98feac79ea3c7a145b555492488266c975a6945f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-714993",
	                "Source": "/var/lib/docker/volumes/newest-cni-714993/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-714993",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-714993",
	                "name.minikube.sigs.k8s.io": "newest-cni-714993",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0fb16c59c58f08c0514c75b8947cacc116f597fd50608cf63e7d50eb45655083",
	            "SandboxKey": "/var/run/docker/netns/0fb16c59c58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-714993": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:db:e5:67:23:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1cfea5893685cdf8198d74a4f99484841fa068338f22db34f688b7b58b6435e9",
	                    "EndpointID": "59171f746596d7d8919058cb99b2f9c27cf5ddbc1266ee34b098ae0036b53fbb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-714993",
	                        "bc5829e976c0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
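
Note: most of the JSON above is incidental; the fields the harness actually keys on can be pulled directly with Go templates instead of dumping the whole document (a sketch using this run's container name and port):

	# Container state, matching the "State" block above:
	docker container inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-714993
	# Host port mapped to the node's SSH port 22 (33458 in this run); this is
	# the same template the minikube log below runs via cli_runner:
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-714993
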
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-714993 -n newest-cni-714993
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-714993 -n newest-cni-714993: exit status 2 (388.430094ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
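
Note: exit status 2 alongside a "Running" host is consistent with a cluster left only partially paused; minikube status reports component health through its exit code, so a non-zero exit can accompany a healthy-looking field (hence the harness's "may be ok"). Querying the standard status fields together makes any mismatch visible (a sketch; the field names follow the status templates already used above):

	out/minikube-linux-arm64 status -p newest-cni-714993 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}' \
	  || echo "status exit code: $?"
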
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-714993 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-714993 logs -n 25: (1.115997548s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 14:58 UTC │ 21 Nov 25 14:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-844780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ stop    │ -p no-preload-844780 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ addons  │ enable metrics-server -p embed-certs-902161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-844780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ stop    │ -p embed-certs-902161 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-902161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ image   │ no-preload-844780 image list --format=json                                                                                                                                                                                                    │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p no-preload-844780 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ image   │ embed-certs-902161 image list --format=json                                                                                                                                                                                                   │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p embed-certs-902161 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-714993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	│ stop    │ -p newest-cni-714993 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-714993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ image   │ newest-cni-714993 image list --format=json                                                                                                                                                                                                    │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ pause   │ -p newest-cni-714993 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 15:02:42
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 15:02:42.390744  496079 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:02:42.390917  496079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:02:42.390929  496079 out.go:374] Setting ErrFile to fd 2...
	I1121 15:02:42.390934  496079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:02:42.391226  496079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:02:42.391661  496079 out.go:368] Setting JSON to false
	I1121 15:02:42.392805  496079 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9914,"bootTime":1763727448,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 15:02:42.392885  496079 start.go:143] virtualization:  
	I1121 15:02:42.396085  496079 out.go:179] * [newest-cni-714993] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 15:02:42.400238  496079 notify.go:221] Checking for updates...
	I1121 15:02:42.401144  496079 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 15:02:42.404265  496079 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 15:02:42.407545  496079 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:02:42.410461  496079 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 15:02:42.413463  496079 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 15:02:42.416275  496079 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 15:02:42.419551  496079 config.go:182] Loaded profile config "newest-cni-714993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:02:42.420171  496079 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 15:02:42.446989  496079 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 15:02:42.447128  496079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:02:42.522186  496079 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 15:02:42.511863031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:02:42.522295  496079 docker.go:319] overlay module found
	I1121 15:02:42.525383  496079 out.go:179] * Using the docker driver based on existing profile
	I1121 15:02:42.528304  496079 start.go:309] selected driver: docker
	I1121 15:02:42.528331  496079 start.go:930] validating driver "docker" against &{Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:02:42.528478  496079 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 15:02:42.529347  496079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:02:42.595215  496079 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 15:02:42.586032197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:02:42.595553  496079 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 15:02:42.595588  496079 cni.go:84] Creating CNI manager for ""
	I1121 15:02:42.595649  496079 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:02:42.595693  496079 start.go:353] cluster config:
	{Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:02:42.600974  496079 out.go:179] * Starting "newest-cni-714993" primary control-plane node in "newest-cni-714993" cluster
	I1121 15:02:42.603877  496079 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 15:02:42.606705  496079 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 15:02:42.609556  496079 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:02:42.609613  496079 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 15:02:42.609639  496079 cache.go:65] Caching tarball of preloaded images
	I1121 15:02:42.609636  496079 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 15:02:42.609723  496079 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 15:02:42.609733  496079 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 15:02:42.609850  496079 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/config.json ...
	I1121 15:02:42.629923  496079 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 15:02:42.629951  496079 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 15:02:42.629964  496079 cache.go:243] Successfully downloaded all kic artifacts
	I1121 15:02:42.629987  496079 start.go:360] acquireMachinesLock for newest-cni-714993: {Name:mk4fe5ba68b949796f6324fdcc6a0615ddd88762 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 15:02:42.630044  496079 start.go:364] duration metric: took 38.68µs to acquireMachinesLock for "newest-cni-714993"
	I1121 15:02:42.630075  496079 start.go:96] Skipping create...Using existing machine configuration
	I1121 15:02:42.630084  496079 fix.go:54] fixHost starting: 
	I1121 15:02:42.630389  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:42.648865  496079 fix.go:112] recreateIfNeeded on newest-cni-714993: state=Stopped err=<nil>
	W1121 15:02:42.648899  496079 fix.go:138] unexpected machine state, will restart: <nil>
	W1121 15:02:40.212198  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:42.213324  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:42.652107  496079 out.go:252] * Restarting existing docker container for "newest-cni-714993" ...
	I1121 15:02:42.652187  496079 cli_runner.go:164] Run: docker start newest-cni-714993
	I1121 15:02:42.915196  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:42.943713  496079 kic.go:430] container "newest-cni-714993" state is running.
	I1121 15:02:42.945898  496079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-714993
	I1121 15:02:42.967257  496079 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/config.json ...
	I1121 15:02:42.967488  496079 machine.go:94] provisionDockerMachine start ...
	I1121 15:02:42.967547  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:42.993317  496079 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:42.993641  496079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1121 15:02:42.993650  496079 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 15:02:42.995557  496079 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51664->127.0.0.1:33458: read: connection reset by peer
	I1121 15:02:46.148592  496079 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-714993
	
	I1121 15:02:46.148617  496079 ubuntu.go:182] provisioning hostname "newest-cni-714993"
	I1121 15:02:46.148681  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:46.171842  496079 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:46.172161  496079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1121 15:02:46.172173  496079 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-714993 && echo "newest-cni-714993" | sudo tee /etc/hostname
	I1121 15:02:46.332036  496079 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-714993
	
	I1121 15:02:46.332134  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:46.350244  496079 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:46.350544  496079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1121 15:02:46.350576  496079 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-714993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-714993/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-714993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 15:02:46.492742  496079 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 15:02:46.492768  496079 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 15:02:46.492794  496079 ubuntu.go:190] setting up certificates
	I1121 15:02:46.492805  496079 provision.go:84] configureAuth start
	I1121 15:02:46.492866  496079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-714993
	I1121 15:02:46.516424  496079 provision.go:143] copyHostCerts
	I1121 15:02:46.516505  496079 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 15:02:46.516529  496079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 15:02:46.516609  496079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 15:02:46.516709  496079 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 15:02:46.516726  496079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 15:02:46.516754  496079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 15:02:46.516812  496079 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 15:02:46.516820  496079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 15:02:46.516843  496079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 15:02:46.516894  496079 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.newest-cni-714993 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-714993]
	I1121 15:02:47.056112  496079 provision.go:177] copyRemoteCerts
	I1121 15:02:47.056188  496079 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 15:02:47.056234  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.074880  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:47.184737  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 15:02:47.204120  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 15:02:47.227078  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 15:02:47.254218  496079 provision.go:87] duration metric: took 761.398483ms to configureAuth
	I1121 15:02:47.254247  496079 ubuntu.go:206] setting minikube options for container-runtime
	I1121 15:02:47.254497  496079 config.go:182] Loaded profile config "newest-cni-714993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:02:47.254646  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.273220  496079 main.go:143] libmachine: Using SSH client type: native
	I1121 15:02:47.273526  496079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1121 15:02:47.273556  496079 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 15:02:47.603481  496079 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 15:02:47.603546  496079 machine.go:97] duration metric: took 4.636045882s to provisionDockerMachine
	I1121 15:02:47.603562  496079 start.go:293] postStartSetup for "newest-cni-714993" (driver="docker")
	I1121 15:02:47.603574  496079 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 15:02:47.603654  496079 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 15:02:47.603702  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.622002  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:47.729434  496079 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 15:02:47.733186  496079 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 15:02:47.733217  496079 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 15:02:47.733246  496079 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 15:02:47.733333  496079 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 15:02:47.733502  496079 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 15:02:47.733681  496079 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 15:02:47.742577  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:02:47.764276  496079 start.go:296] duration metric: took 160.697434ms for postStartSetup
	I1121 15:02:47.764419  496079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 15:02:47.764464  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.782075  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:47.881690  496079 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 15:02:47.886594  496079 fix.go:56] duration metric: took 5.256501377s for fixHost
	I1121 15:02:47.886621  496079 start.go:83] releasing machines lock for "newest-cni-714993", held for 5.256562786s
	I1121 15:02:47.886685  496079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-714993
	I1121 15:02:47.903454  496079 ssh_runner.go:195] Run: cat /version.json
	I1121 15:02:47.903510  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.903649  496079 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 15:02:47.903703  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:47.924815  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:47.938428  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:48.024801  496079 ssh_runner.go:195] Run: systemctl --version
	I1121 15:02:48.127467  496079 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 15:02:48.164336  496079 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 15:02:48.169049  496079 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 15:02:48.169135  496079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 15:02:48.178845  496079 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 15:02:48.178867  496079 start.go:496] detecting cgroup driver to use...
	I1121 15:02:48.178900  496079 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 15:02:48.178962  496079 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 15:02:48.194859  496079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 15:02:48.213524  496079 docker.go:218] disabling cri-docker service (if available) ...
	I1121 15:02:48.213610  496079 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 15:02:48.229437  496079 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 15:02:48.244241  496079 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 15:02:48.378061  496079 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 15:02:48.499991  496079 docker.go:234] disabling docker service ...
	I1121 15:02:48.500081  496079 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 15:02:48.517060  496079 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 15:02:48.535746  496079 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 15:02:48.657717  496079 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 15:02:48.787732  496079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 15:02:48.802604  496079 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 15:02:48.816743  496079 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 15:02:48.816859  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.826770  496079 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 15:02:48.826914  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.836458  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.846662  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.855700  496079 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 15:02:48.864154  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.873750  496079 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.882448  496079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:02:48.891141  496079 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 15:02:48.899377  496079 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 15:02:48.907383  496079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:02:49.027459  496079 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 15:02:49.203884  496079 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 15:02:49.204007  496079 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 15:02:49.208624  496079 start.go:564] Will wait 60s for crictl version
	I1121 15:02:49.208739  496079 ssh_runner.go:195] Run: which crictl
	I1121 15:02:49.213274  496079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 15:02:49.243703  496079 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 15:02:49.243793  496079 ssh_runner.go:195] Run: crio --version
	I1121 15:02:49.277206  496079 ssh_runner.go:195] Run: crio --version
	I1121 15:02:49.328716  496079 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 15:02:49.331606  496079 cli_runner.go:164] Run: docker network inspect newest-cni-714993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 15:02:49.350008  496079 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 15:02:49.353884  496079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:02:49.366483  496079 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1121 15:02:44.710629  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:46.711463  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:49.211292  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:49.369308  496079 kubeadm.go:884] updating cluster {Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 15:02:49.369460  496079 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:02:49.369533  496079 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:02:49.405700  496079 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:02:49.405726  496079 crio.go:433] Images already preloaded, skipping extraction
	I1121 15:02:49.405791  496079 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:02:49.431974  496079 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:02:49.432001  496079 cache_images.go:86] Images are preloaded, skipping loading
	I1121 15:02:49.432009  496079 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1121 15:02:49.432114  496079 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-714993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
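A systemd detail worth noting in the drop-in above: the bare ExecStart= line is intentional. In a drop-in, an empty ExecStart= clears the command list inherited from /lib/systemd/system/kubelet.service, so the full ExecStart= that follows replaces the packaged command instead of appending a second one.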
	I1121 15:02:49.432203  496079 ssh_runner.go:195] Run: crio config
	I1121 15:02:49.490938  496079 cni.go:84] Creating CNI manager for ""
	I1121 15:02:49.490963  496079 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:02:49.490980  496079 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1121 15:02:49.491003  496079 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-714993 NodeName:newest-cni-714993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 15:02:49.491148  496079 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-714993"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 15:02:49.491221  496079 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 15:02:49.498994  496079 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 15:02:49.499062  496079 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 15:02:49.506800  496079 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 15:02:49.520306  496079 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 15:02:49.533712  496079 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
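The kubeadm.yaml.new just copied is the three-document config dumped above. Recent kubeadm releases can sanity-check such a file before it is used (a hypothetical manual step, not something the log runs; subcommand availability depends on the kubeadm version):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new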
	I1121 15:02:49.546858  496079 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 15:02:49.551381  496079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:02:49.561367  496079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:02:49.685924  496079 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:02:49.701694  496079 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993 for IP: 192.168.76.2
	I1121 15:02:49.701768  496079 certs.go:195] generating shared ca certs ...
	I1121 15:02:49.701800  496079 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:49.701999  496079 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 15:02:49.702064  496079 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 15:02:49.702107  496079 certs.go:257] generating profile certs ...
	I1121 15:02:49.702266  496079 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/client.key
	I1121 15:02:49.702377  496079 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.key.90646b61
	I1121 15:02:49.702456  496079 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.key
	I1121 15:02:49.702627  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 15:02:49.702690  496079 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 15:02:49.702716  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 15:02:49.702775  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 15:02:49.702835  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 15:02:49.702882  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 15:02:49.702958  496079 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:02:49.703800  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 15:02:49.727104  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 15:02:49.747558  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 15:02:49.768916  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 15:02:49.793932  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 15:02:49.817553  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 15:02:49.840169  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 15:02:49.863524  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/newest-cni-714993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 15:02:49.886827  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 15:02:49.912104  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 15:02:49.938650  496079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 15:02:49.961725  496079 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 15:02:49.975838  496079 ssh_runner.go:195] Run: openssl version
	I1121 15:02:49.983766  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 15:02:49.994392  496079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:02:49.998273  496079 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:02:49.998344  496079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:02:50.044482  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 15:02:50.053859  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 15:02:50.062982  496079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 15:02:50.067176  496079 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 15:02:50.067315  496079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 15:02:50.109852  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 15:02:50.118270  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 15:02:50.127502  496079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 15:02:50.131588  496079 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 15:02:50.131663  496079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 15:02:50.174377  496079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
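The hash/symlink sequence above is OpenSSL's hashed-directory convention: TLS libraries locate a CA in /etc/ssl/certs by a filename of the form <subject-hash>.0, and the hash printed by openssl x509 -hash is exactly the link name created. Using the values from the log:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0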
	I1121 15:02:50.183598  496079 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 15:02:50.187934  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 15:02:50.229550  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 15:02:50.277404  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 15:02:50.323056  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 15:02:50.403921  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 15:02:50.503153  496079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
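openssl x509 -checkend N exits 0 if the certificate is still valid N seconds from now and 1 otherwise, so each -checkend 86400 run above is a shell-friendly "survives the next 24 hours" probe, e.g. (cert.crt standing in for any of the paths above):

	openssl x509 -noout -in cert.crt -checkend 86400 && echo "ok" || echo "expires within 24h"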
	I1121 15:02:50.623174  496079 kubeadm.go:401] StartCluster: {Name:newest-cni-714993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-714993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:02:50.623380  496079 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 15:02:50.623508  496079 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 15:02:50.678529  496079 cri.go:89] found id: "2edf2c715c49a8ea3535cad8175de2b076e3defaf18e6a36a9e7d31008d89625"
	I1121 15:02:50.678629  496079 cri.go:89] found id: "730f5e074ea29c84ee762fa289c70402bf32780d481aaa7682e731dd9794d540"
	I1121 15:02:50.678649  496079 cri.go:89] found id: "a66dde0c760949de4963a031792226b729cb39589bf9b5e48c1f90fc16d85523"
	I1121 15:02:50.678697  496079 cri.go:89] found id: "6e6566081c03aa51453671dda29548d283a3156fd32976da87a2e0708b5ca23e"
	I1121 15:02:50.678721  496079 cri.go:89] found id: ""
	I1121 15:02:50.678831  496079 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 15:02:50.703062  496079 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:02:50Z" level=error msg="open /run/runc: no such file or directory"
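This warning is benign: a missing /run/runc state directory just means runc is tracking no (paused) containers there, so there is nothing to unpause, and the flow proceeds to the existing-configuration check below.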
	I1121 15:02:50.703240  496079 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 15:02:50.715059  496079 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 15:02:50.715079  496079 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 15:02:50.715134  496079 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 15:02:50.733115  496079 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 15:02:50.733897  496079 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-714993" does not appear in /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:02:50.734306  496079 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-289204/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-714993" cluster setting kubeconfig missing "newest-cni-714993" context setting]
	I1121 15:02:50.734918  496079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:50.736740  496079 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 15:02:50.745366  496079 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1121 15:02:50.745448  496079 kubeadm.go:602] duration metric: took 30.362158ms to restartPrimaryControlPlane
	I1121 15:02:50.745472  496079 kubeadm.go:403] duration metric: took 122.307891ms to StartCluster
	I1121 15:02:50.745515  496079 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:50.745606  496079 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:02:50.746612  496079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:02:50.746912  496079 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:02:50.747472  496079 config.go:182] Loaded profile config "newest-cni-714993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:02:50.747423  496079 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 15:02:50.747679  496079 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-714993"
	I1121 15:02:50.747721  496079 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-714993"
	W1121 15:02:50.747743  496079 addons.go:248] addon storage-provisioner should already be in state true
	I1121 15:02:50.747829  496079 host.go:66] Checking if "newest-cni-714993" exists ...
	I1121 15:02:50.748451  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:50.748657  496079 addons.go:70] Setting dashboard=true in profile "newest-cni-714993"
	I1121 15:02:50.748701  496079 addons.go:239] Setting addon dashboard=true in "newest-cni-714993"
	W1121 15:02:50.748722  496079 addons.go:248] addon dashboard should already be in state true
	I1121 15:02:50.748805  496079 host.go:66] Checking if "newest-cni-714993" exists ...
	I1121 15:02:50.748961  496079 addons.go:70] Setting default-storageclass=true in profile "newest-cni-714993"
	I1121 15:02:50.748993  496079 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-714993"
	I1121 15:02:50.749278  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:50.749377  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:50.755451  496079 out.go:179] * Verifying Kubernetes components...
	I1121 15:02:50.758751  496079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:02:50.799096  496079 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 15:02:50.802525  496079 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1121 15:02:50.808522  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 15:02:50.808552  496079 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 15:02:50.808694  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:50.812000  496079 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 15:02:50.818481  496079 addons.go:239] Setting addon default-storageclass=true in "newest-cni-714993"
	W1121 15:02:50.818512  496079 addons.go:248] addon default-storageclass should already be in state true
	I1121 15:02:50.818554  496079 host.go:66] Checking if "newest-cni-714993" exists ...
	I1121 15:02:50.819084  496079 cli_runner.go:164] Run: docker container inspect newest-cni-714993 --format={{.State.Status}}
	I1121 15:02:50.822097  496079 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:02:50.822141  496079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 15:02:50.822209  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:50.874684  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:50.875518  496079 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 15:02:50.875535  496079 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 15:02:50.875786  496079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-714993
	I1121 15:02:50.882510  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:50.909181  496079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/newest-cni-714993/id_rsa Username:docker}
	I1121 15:02:51.176297  496079 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:02:51.185545  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 15:02:51.185637  496079 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 15:02:51.205710  496079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 15:02:51.231325  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 15:02:51.231411  496079 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 15:02:51.238398  496079 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:02:51.238474  496079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:02:51.268184  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 15:02:51.268210  496079 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 15:02:51.295505  496079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:02:51.354217  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 15:02:51.354241  496079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 15:02:51.404607  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 15:02:51.404634  496079 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 15:02:51.483045  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 15:02:51.483065  496079 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 15:02:51.569523  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 15:02:51.569549  496079 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 15:02:51.615112  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 15:02:51.615137  496079 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 15:02:51.649666  496079 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 15:02:51.649690  496079 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 15:02:51.668234  496079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1121 15:02:51.716272  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	W1121 15:02:54.212012  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:56.113453  496079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.907650842s)
	I1121 15:02:56.113809  496079 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.875316755s)
	I1121 15:02:56.113837  496079 api_server.go:72] duration metric: took 5.366878137s to wait for apiserver process to appear ...
	I1121 15:02:56.113847  496079 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:02:56.113860  496079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:56.300639  496079 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 15:02:56.300674  496079 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
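The paired I/W dumps above are one poll iteration: /healthz answers 500 with a per-check [+]/[-] breakdown until the failing poststarthooks (rbac/bootstrap-roles and friends) complete, and the caller keeps retrying until a bare 200/ok. A minimal standalone sketch of that polling pattern in Go, not minikube's actual code (address taken from the log; TLS verification is skipped purely for illustration, a real client should verify against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: skip verification of the apiserver's self-signed cert.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for attempt := 0; attempt < 120; attempt++ {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // poll interval; minikube's cadence differs
		}
		fmt.Println("gave up waiting for /healthz")
	}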
	I1121 15:02:56.614268  496079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:56.625340  496079 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 15:02:56.625375  496079 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 15:02:57.114692  496079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:57.123634  496079 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 15:02:57.123664  496079 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 15:02:57.614484  496079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:57.664717  496079 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 15:02:57.664824  496079 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 15:02:57.844307  496079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.548765867s)
	I1121 15:02:57.844446  496079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.17618026s)
	I1121 15:02:57.847552  496079 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-714993 addons enable metrics-server
	
	I1121 15:02:57.850468  496079 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1121 15:02:57.853345  496079 addons.go:530] duration metric: took 7.105909279s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1121 15:02:58.113985  496079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:02:58.126506  496079 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 15:02:58.128017  496079 api_server.go:141] control plane version: v1.34.1
	I1121 15:02:58.128040  496079 api_server.go:131] duration metric: took 2.014186716s to wait for apiserver health ...
	I1121 15:02:58.128049  496079 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:02:58.135199  496079 system_pods.go:59] 8 kube-system pods found
	I1121 15:02:58.135236  496079 system_pods.go:61] "coredns-66bc5c9577-gg7hh" [9870c48c-8548-4838-8cb4-9174010fdcd0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 15:02:58.135247  496079 system_pods.go:61] "etcd-newest-cni-714993" [c62fc121-98a5-4101-a4f7-b563520e09a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:02:58.135255  496079 system_pods.go:61] "kindnet-jssq6" [da7ab922-ecf7-449c-9aac-481926be6add] Running
	I1121 15:02:58.135262  496079 system_pods.go:61] "kube-apiserver-newest-cni-714993" [a9d0ff40-44af-4f7c-beed-5c0b8061b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 15:02:58.135268  496079 system_pods.go:61] "kube-controller-manager-newest-cni-714993" [f3df974c-7a27-41a2-aaae-664da491b689] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 15:02:58.135273  496079 system_pods.go:61] "kube-proxy-jmrq8" [153afdbe-a8ec-43f0-a76c-4b9c81867c6e] Running
	I1121 15:02:58.135279  496079 system_pods.go:61] "kube-scheduler-newest-cni-714993" [257585ec-0c18-4a96-a45d-d924cf069dff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:02:58.135285  496079 system_pods.go:61] "storage-provisioner" [36238ceb-8620-4b93-9a0a-f802b27e8c16] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 15:02:58.135291  496079 system_pods.go:74] duration metric: took 7.236121ms to wait for pod list to return data ...
	I1121 15:02:58.135300  496079 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:02:58.140155  496079 default_sa.go:45] found service account: "default"
	I1121 15:02:58.140178  496079 default_sa.go:55] duration metric: took 4.872959ms for default service account to be created ...
	I1121 15:02:58.140191  496079 kubeadm.go:587] duration metric: took 7.393230073s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 15:02:58.140209  496079 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:02:58.143876  496079 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:02:58.143961  496079 node_conditions.go:123] node cpu capacity is 2
	I1121 15:02:58.143989  496079 node_conditions.go:105] duration metric: took 3.774329ms to run NodePressure ...
	I1121 15:02:58.144032  496079 start.go:242] waiting for startup goroutines ...
	I1121 15:02:58.144055  496079 start.go:247] waiting for cluster config update ...
	I1121 15:02:58.144079  496079 start.go:256] writing updated cluster config ...
	I1121 15:02:58.144429  496079 ssh_runner.go:195] Run: rm -f paused
	I1121 15:02:58.226616  496079 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:02:58.230233  496079 out.go:179] * Done! kubectl is now configured to use "newest-cni-714993" cluster and "default" namespace by default
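The closing skew note is informational rather than an error: kubectl is supported within one minor version of the apiserver, so a 1.33 client against a 1.34 cluster is a legal pairing, and upgrading kubectl simply silences the notice.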
	W1121 15:02:56.709769  489211 node_ready.go:57] node "default-k8s-diff-port-124330" has "Ready":"False" status (will retry)
	I1121 15:02:57.209883  489211 node_ready.go:49] node "default-k8s-diff-port-124330" is "Ready"
	I1121 15:02:57.209907  489211 node_ready.go:38] duration metric: took 39.502764026s for node "default-k8s-diff-port-124330" to be "Ready" ...
	I1121 15:02:57.209922  489211 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:02:57.209981  489211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:02:57.226304  489211 api_server.go:72] duration metric: took 41.578586729s to wait for apiserver process to appear ...
	I1121 15:02:57.226325  489211 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:02:57.226343  489211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1121 15:02:57.236434  489211 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1121 15:02:57.239252  489211 api_server.go:141] control plane version: v1.34.1
	I1121 15:02:57.239280  489211 api_server.go:131] duration metric: took 12.947613ms to wait for apiserver health ...
	I1121 15:02:57.239289  489211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:02:57.244962  489211 system_pods.go:59] 8 kube-system pods found
	I1121 15:02:57.244996  489211 system_pods.go:61] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:02:57.245003  489211 system_pods.go:61] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running
	I1121 15:02:57.245009  489211 system_pods.go:61] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:02:57.245015  489211 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:02:57.245019  489211 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:02:57.245024  489211 system_pods.go:61] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:02:57.245028  489211 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running
	I1121 15:02:57.245033  489211 system_pods.go:61] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:02:57.245040  489211 system_pods.go:74] duration metric: took 5.74442ms to wait for pod list to return data ...
	I1121 15:02:57.245050  489211 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:02:57.254861  489211 default_sa.go:45] found service account: "default"
	I1121 15:02:57.254939  489211 default_sa.go:55] duration metric: took 9.882674ms for default service account to be created ...
	I1121 15:02:57.254975  489211 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:02:57.281803  489211 system_pods.go:86] 8 kube-system pods found
	I1121 15:02:57.281832  489211 system_pods.go:89] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:02:57.281840  489211 system_pods.go:89] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running
	I1121 15:02:57.281846  489211 system_pods.go:89] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:02:57.281850  489211 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:02:57.281855  489211 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:02:57.281860  489211 system_pods.go:89] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:02:57.281864  489211 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running
	I1121 15:02:57.281869  489211 system_pods.go:89] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:02:57.281897  489211 retry.go:31] will retry after 212.204634ms: missing components: kube-dns
	I1121 15:02:57.507373  489211 system_pods.go:86] 8 kube-system pods found
	I1121 15:02:57.507402  489211 system_pods.go:89] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:02:57.507408  489211 system_pods.go:89] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running
	I1121 15:02:57.507414  489211 system_pods.go:89] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:02:57.507418  489211 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:02:57.507422  489211 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:02:57.507426  489211 system_pods.go:89] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:02:57.507430  489211 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running
	I1121 15:02:57.507436  489211 system_pods.go:89] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:02:57.507450  489211 retry.go:31] will retry after 360.006251ms: missing components: kube-dns
	I1121 15:02:57.873550  489211 system_pods.go:86] 8 kube-system pods found
	I1121 15:02:57.873663  489211 system_pods.go:89] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Running
	I1121 15:02:57.873737  489211 system_pods.go:89] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running
	I1121 15:02:57.873771  489211 system_pods.go:89] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:02:57.873790  489211 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:02:57.873837  489211 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:02:57.873862  489211 system_pods.go:89] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:02:57.873893  489211 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running
	I1121 15:02:57.873933  489211 system_pods.go:89] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Running
	I1121 15:02:57.873972  489211 system_pods.go:126] duration metric: took 618.975133ms to wait for k8s-apps to be running ...
	I1121 15:02:57.874013  489211 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:02:57.874115  489211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:02:57.903361  489211 system_svc.go:56] duration metric: took 29.321194ms WaitForService to wait for kubelet
	I1121 15:02:57.903453  489211 kubeadm.go:587] duration metric: took 42.255740745s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:02:57.903496  489211 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:02:57.908109  489211 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:02:57.908210  489211 node_conditions.go:123] node cpu capacity is 2
	I1121 15:02:57.908247  489211 node_conditions.go:105] duration metric: took 4.719653ms to run NodePressure ...
	I1121 15:02:57.908299  489211 start.go:242] waiting for startup goroutines ...
	I1121 15:02:57.908345  489211 start.go:247] waiting for cluster config update ...
	I1121 15:02:57.908412  489211 start.go:256] writing updated cluster config ...
	I1121 15:02:57.908831  489211 ssh_runner.go:195] Run: rm -f paused
	I1121 15:02:57.913796  489211 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:02:57.919831  489211 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zhrs7" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.928080  489211 pod_ready.go:94] pod "coredns-66bc5c9577-zhrs7" is "Ready"
	I1121 15:02:57.928169  489211 pod_ready.go:86] duration metric: took 8.238495ms for pod "coredns-66bc5c9577-zhrs7" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.931824  489211 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.942142  489211 pod_ready.go:94] pod "etcd-default-k8s-diff-port-124330" is "Ready"
	I1121 15:02:57.942245  489211 pod_ready.go:86] duration metric: took 10.274064ms for pod "etcd-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.946600  489211 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.953194  489211 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-124330" is "Ready"
	I1121 15:02:57.953262  489211 pod_ready.go:86] duration metric: took 6.56527ms for pod "kube-apiserver-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:57.955896  489211 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:58.323172  489211 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-124330" is "Ready"
	I1121 15:02:58.323196  489211 pod_ready.go:86] duration metric: took 367.217656ms for pod "kube-controller-manager-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:58.518765  489211 pod_ready.go:83] waiting for pod "kube-proxy-fr5df" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:58.918804  489211 pod_ready.go:94] pod "kube-proxy-fr5df" is "Ready"
	I1121 15:02:58.918830  489211 pod_ready.go:86] duration metric: took 400.039888ms for pod "kube-proxy-fr5df" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:59.122830  489211 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:59.518733  489211 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-124330" is "Ready"
	I1121 15:02:59.518762  489211 pod_ready.go:86] duration metric: took 395.908007ms for pod "kube-scheduler-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:02:59.518775  489211 pod_ready.go:40] duration metric: took 1.604878893s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:02:59.595903  489211 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:02:59.599202  489211 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-124330" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.256835425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.270532208Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=700163c8-85fb-47e4-9d8e-8a938179fe2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.27242581Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-jmrq8/POD" id=940703bd-899e-42c5-a505-b787168765b9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.272611822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.291525978Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=940703bd-899e-42c5-a505-b787168765b9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.29712624Z" level=info msg="Ran pod sandbox d4e19cd090f91d52c4f8dd87d3e89baa605a4fccefd2ee8fd72d43dad30cd079 with infra container: kube-system/kindnet-jssq6/POD" id=700163c8-85fb-47e4-9d8e-8a938179fe2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.305300891Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=729dfcee-c81a-4d38-b508-95b5d7e36db1 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.307980323Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=72f3c550-f686-4e5c-9ec2-7f877ffac505 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.312814217Z" level=info msg="Creating container: kube-system/kindnet-jssq6/kindnet-cni" id=9f5dc288-8580-4650-8fbe-7671eaa8efb7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.313067595Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.347134608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.347895363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.358397524Z" level=info msg="Ran pod sandbox 3e573abf4b8a0bf606d4acec6c128d59d7476edba6117c61b78b242856f50102 with infra container: kube-system/kube-proxy-jmrq8/POD" id=940703bd-899e-42c5-a505-b787168765b9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.371676185Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0457464a-c6f7-44f9-993e-fabba2dd2f36 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.375190532Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fddf1e67-37c3-46aa-aa66-5ba4d560341d name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.379117929Z" level=info msg="Creating container: kube-system/kube-proxy-jmrq8/kube-proxy" id=fa3de2cf-07b4-43e7-b74c-d9f8d43d69e9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.379429974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.433293508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.434012163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.44931054Z" level=info msg="Created container 91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72: kube-system/kindnet-jssq6/kindnet-cni" id=9f5dc288-8580-4650-8fbe-7671eaa8efb7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.45017839Z" level=info msg="Starting container: 91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72" id=4232609c-da8e-4693-aec1-375574b1cc05 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.467109901Z" level=info msg="Started container" PID=1065 containerID=91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72 description=kube-system/kindnet-jssq6/kindnet-cni id=4232609c-da8e-4693-aec1-375574b1cc05 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d4e19cd090f91d52c4f8dd87d3e89baa605a4fccefd2ee8fd72d43dad30cd079
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.489884476Z" level=info msg="Created container 4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e: kube-system/kube-proxy-jmrq8/kube-proxy" id=fa3de2cf-07b4-43e7-b74c-d9f8d43d69e9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.491571273Z" level=info msg="Starting container: 4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e" id=7f6f01f2-35bf-4316-a589-c91b4ce51c9d name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:02:56 newest-cni-714993 crio[612]: time="2025-11-21T15:02:56.494808767Z" level=info msg="Started container" PID=1072 containerID=4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e description=kube-system/kube-proxy-jmrq8/kube-proxy id=7f6f01f2-35bf-4316-a589-c91b4ce51c9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e573abf4b8a0bf606d4acec6c128d59d7476edba6117c61b78b242856f50102
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4358b82c8954f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   3e573abf4b8a0       kube-proxy-jmrq8                            kube-system
	91c8c5d635117       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   d4e19cd090f91       kindnet-jssq6                               kube-system
	2edf2c715c49a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   62df6935e425c       kube-controller-manager-newest-cni-714993   kube-system
	730f5e074ea29       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   2a55f7f2ae218       kube-apiserver-newest-cni-714993            kube-system
	a66dde0c76094       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   a8c32deb16c5f       kube-scheduler-newest-cni-714993            kube-system
	6e6566081c03a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   4ef705610f57f       etcd-newest-cni-714993                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-714993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-714993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=newest-cni-714993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T15_02_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 15:02:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-714993
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:02:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:25 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-714993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                bc9159db-8195-45b8-b93a-134eb7c35db1
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-714993                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-jssq6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-714993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-714993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-jmrq8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-714993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-714993 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-714993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-714993 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-714993 event: Registered Node newest-cni-714993 in Controller
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-714993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-714993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-714993 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-714993 event: Registered Node newest-cni-714993 in Controller
	
	
	==> dmesg <==
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	[Nov21 15:00] overlayfs: idmapped layers are currently not supported
	[ +13.392744] overlayfs: idmapped layers are currently not supported
	[Nov21 15:01] overlayfs: idmapped layers are currently not supported
	[Nov21 15:02] overlayfs: idmapped layers are currently not supported
	[ +25.555443] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6e6566081c03aa51453671dda29548d283a3156fd32976da87a2e0708b5ca23e] <==
	{"level":"warn","ts":"2025-11-21T15:02:54.171335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.197636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.277143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.304501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.341286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.367639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.385479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.407326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.429410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.452172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.481078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.501005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.521792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.542687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.582083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.589472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.613310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.637898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.658136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.681125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.713953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.731825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.749625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.773948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:54.870922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44660","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:03:04 up  2:45,  0 user,  load average: 5.26, 3.89, 2.98
	Linux newest-cni-714993 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [91c8c5d6351174940402e2f7125bc3576786fa4abc2808b9d5dfc2a6dce40f72] <==
	I1121 15:02:56.508249       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 15:02:56.510450       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 15:02:56.511532       1 main.go:148] setting mtu 1500 for CNI 
	I1121 15:02:56.511603       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 15:02:56.511644       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T15:02:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 15:02:56.800753       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 15:02:56.800783       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 15:02:56.800797       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 15:02:56.800921       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [730f5e074ea29c84ee762fa289c70402bf32780d481aaa7682e731dd9794d540] <==
	I1121 15:02:56.102461       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 15:02:56.102569       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1121 15:02:56.102770       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 15:02:56.126420       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 15:02:56.128250       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 15:02:56.185010       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 15:02:56.241241       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1121 15:02:56.246561       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 15:02:56.246630       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 15:02:56.276513       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 15:02:56.276622       1 aggregator.go:171] initial CRD sync complete...
	I1121 15:02:56.276633       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 15:02:56.276640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 15:02:56.276646       1 cache.go:39] Caches are synced for autoregister controller
	E1121 15:02:56.364733       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 15:02:56.683277       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 15:02:57.112046       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 15:02:57.348790       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 15:02:57.461309       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 15:02:57.530901       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 15:02:57.768992       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.98.8"}
	I1121 15:02:57.805390       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.229.103"}
	I1121 15:03:00.681132       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 15:03:00.725662       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 15:03:00.744029       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2edf2c715c49a8ea3535cad8175de2b076e3defaf18e6a36a9e7d31008d89625] <==
	I1121 15:03:00.649173       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 15:03:00.649185       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 15:03:00.650050       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 15:03:00.651585       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 15:03:00.651763       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 15:03:00.650129       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 15:03:00.656252       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 15:03:00.658665       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:03:00.668262       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 15:03:00.673238       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 15:03:00.674089       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 15:03:00.676451       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 15:03:00.696450       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 15:03:00.697067       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 15:03:00.697156       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 15:03:00.697300       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 15:03:00.698727       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 15:03:00.699646       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 15:03:00.700102       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-714993"
	I1121 15:03:00.700206       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 15:03:00.699543       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:03:00.699564       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 15:03:00.699576       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 15:03:00.702391       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 15:03:00.703898       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4358b82c8954ff00c750a8e7c797bb0f9e4326b91ea2d2a26f1f41c7d11f898e] <==
	I1121 15:02:57.865055       1 server_linux.go:53] "Using iptables proxy"
	I1121 15:02:58.078126       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 15:02:58.190555       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 15:02:58.190679       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 15:02:58.190788       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 15:02:58.300812       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 15:02:58.300961       1 server_linux.go:132] "Using iptables Proxier"
	I1121 15:02:58.324011       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 15:02:58.324525       1 server.go:527] "Version info" version="v1.34.1"
	I1121 15:02:58.324822       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:02:58.326120       1 config.go:200] "Starting service config controller"
	I1121 15:02:58.326189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 15:02:58.326230       1 config.go:106] "Starting endpoint slice config controller"
	I1121 15:02:58.326256       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 15:02:58.326291       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 15:02:58.326317       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 15:02:58.334006       1 config.go:309] "Starting node config controller"
	I1121 15:02:58.334096       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 15:02:58.334130       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 15:02:58.426354       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 15:02:58.426413       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 15:02:58.426461       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a66dde0c760949de4963a031792226b729cb39589bf9b5e48c1f90fc16d85523] <==
	I1121 15:02:54.668205       1 serving.go:386] Generated self-signed cert in-memory
	I1121 15:02:58.173922       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 15:02:58.174057       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:02:58.179320       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 15:02:58.179707       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 15:02:58.179776       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 15:02:58.179856       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 15:02:58.181847       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:02:58.191336       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:02:58.190722       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:02:58.191837       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:02:58.283726       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 15:02:58.294263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:02:58.294451       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 15:02:52 newest-cni-714993 kubelet[733]: E1121 15:02:52.969950     733 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-714993\" not found" node="newest-cni-714993"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.821725     733 apiserver.go:52] "Watching apiserver"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.834147     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-714993"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.951074     733 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.986976     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/153afdbe-a8ec-43f0-a76c-4b9c81867c6e-xtables-lock\") pod \"kube-proxy-jmrq8\" (UID: \"153afdbe-a8ec-43f0-a76c-4b9c81867c6e\") " pod="kube-system/kube-proxy-jmrq8"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.987017     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/153afdbe-a8ec-43f0-a76c-4b9c81867c6e-lib-modules\") pod \"kube-proxy-jmrq8\" (UID: \"153afdbe-a8ec-43f0-a76c-4b9c81867c6e\") " pod="kube-system/kube-proxy-jmrq8"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.987069     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/da7ab922-ecf7-449c-9aac-481926be6add-cni-cfg\") pod \"kindnet-jssq6\" (UID: \"da7ab922-ecf7-449c-9aac-481926be6add\") " pod="kube-system/kindnet-jssq6"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.987087     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da7ab922-ecf7-449c-9aac-481926be6add-lib-modules\") pod \"kindnet-jssq6\" (UID: \"da7ab922-ecf7-449c-9aac-481926be6add\") " pod="kube-system/kindnet-jssq6"
	Nov 21 15:02:55 newest-cni-714993 kubelet[733]: I1121 15:02:55.987126     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da7ab922-ecf7-449c-9aac-481926be6add-xtables-lock\") pod \"kindnet-jssq6\" (UID: \"da7ab922-ecf7-449c-9aac-481926be6add\") " pod="kube-system/kindnet-jssq6"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.076856     733 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: E1121 15:02:56.255901     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-714993\" already exists" pod="kube-system/etcd-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.255930     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.323074     733 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: E1121 15:02:56.327067     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-714993\" already exists" pod="kube-system/kube-apiserver-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.327120     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.327225     733 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.327385     733 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.336036     733 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: W1121 15:02:56.353662     733 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bc5829e976c0118006bc5ee4d9bea5d10810b7de4fdf7292bcd7fb2bdfbeb97a/crio-3e573abf4b8a0bf606d4acec6c128d59d7476edba6117c61b78b242856f50102 WatchSource:0}: Error finding container 3e573abf4b8a0bf606d4acec6c128d59d7476edba6117c61b78b242856f50102: Status 404 returned error can't find the container with id 3e573abf4b8a0bf606d4acec6c128d59d7476edba6117c61b78b242856f50102
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: E1121 15:02:56.371044     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-714993\" already exists" pod="kube-system/kube-controller-manager-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: I1121 15:02:56.371081     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-714993"
	Nov 21 15:02:56 newest-cni-714993 kubelet[733]: E1121 15:02:56.405777     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-714993\" already exists" pod="kube-system/kube-scheduler-newest-cni-714993"
	Nov 21 15:02:59 newest-cni-714993 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 15:02:59 newest-cni-714993 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 15:02:59 newest-cni-714993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-714993 -n newest-cni-714993
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-714993 -n newest-cni-714993: exit status 2 (383.253327ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-714993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gg7hh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c6w9g kubernetes-dashboard-855c9754f9-p7ftl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-714993 describe pod coredns-66bc5c9577-gg7hh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c6w9g kubernetes-dashboard-855c9754f9-p7ftl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-714993 describe pod coredns-66bc5c9577-gg7hh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c6w9g kubernetes-dashboard-855c9754f9-p7ftl: exit status 1 (90.114539ms)
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gg7hh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-c6w9g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-p7ftl" not found
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-714993 describe pod coredns-66bc5c9577-gg7hh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c6w9g kubernetes-dashboard-855c9754f9-p7ftl: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.74s)
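Note: the Pause/Unpause failures in this run share one mechanism. The paused-state check shells out to "sudo runc list -f json" on the node (the same error appears verbatim in the stderr of the next test below), and on these CRI-O nodes it exits non-zero because /run/runc does not exist. A minimal reproduction sketch, assuming the profile name from this run and a working "minikube ssh"; the runc invocation is copied from the log, the crictl probe is an added suggestion:

	# the exact check that fails here: "open /run/runc: no such file or directory"
	out/minikube-linux-arm64 -p newest-cni-714993 ssh "sudo runc list -f json"
	# what CRI-O itself reports for the same containers
	out/minikube-linux-arm64 -p newest-cni-714993 ssh "sudo crictl ps -a"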
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-124330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-124330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (355.089306ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:03:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-124330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
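Note: exit status 11 (MK_ADDON_ENABLE_PAUSED) comes from the same pre-flight paused check, not from the metrics-server addon itself; the runc error in the stderr above is identical to the one behind the Pause failures. Two hedged probes of the node's runtime state (command forms assumed reasonable, not taken from this run):

	# dump CRI-O's runtime status and configuration as JSON
	out/minikube-linux-arm64 -p default-k8s-diff-port-124330 ssh "sudo crictl info"
	# list the directory the failing runc check expects to read
	out/minikube-linux-arm64 -p default-k8s-diff-port-124330 ssh "ls -la /run/runc"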
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-124330 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-124330 describe deploy/metrics-server -n kube-system: exit status 1 (111.904454ms)
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-124330 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
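Note: this assertion expects the deployment's pod template image to contain "fake.domain/registry.k8s.io/echoserver:1.4", but the describe call above returned NotFound because the enable command aborted before applying any manifests, so the deployment info string is empty. When the deployment does exist, the image can be read directly with jsonpath (a sketch; standard kubectl, nothing run-specific assumed beyond the context name):

	kubectl --context default-k8s-diff-port-124330 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'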
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-124330
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-124330:
-- stdout --
	[
	    {
	        "Id": "fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818",
	        "Created": "2025-11-21T15:01:40.035459408Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489600,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T15:01:40.109824504Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/hostname",
	        "HostsPath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/hosts",
	        "LogPath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818-json.log",
	        "Name": "/default-k8s-diff-port-124330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-124330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-124330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818",
	                "LowerDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-124330",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-124330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-124330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-124330",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-124330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6407dd0ccd197f02bb11d443fcc80feffb07bd76e0b8a234b46c9ba6b571588f",
	            "SandboxKey": "/var/run/docker/netns/6407dd0ccd19",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-124330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:95:db:b3:3a:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "571375adbe67c8114c1253f4d87fb2a0f5ebbd2759db87cf3bcc3311dbadaf5e",
	                    "EndpointID": "1cd8a8bed6817cad29a8645aa210e1c44cf2afbc839e94ba97d0770399a3213d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-124330",
	                        "fad72cd6bedb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
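One detail worth noting in the inspect output: HostConfig.PortBindings requests ephemeral 127.0.0.1 ports (the empty HostPort values), and the ports Docker actually assigned appear under NetworkSettings.Ports. A short sketch for reading one mapping back out via Docker's Go templates, assuming the container name from this run (illustrative):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Recover the host port mapped to the container's 22/tcp — the
		// "33448" visible under NetworkSettings.Ports in the output above.
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-124330").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(strings.TrimSpace(string(out)))
	}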
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-124330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-124330 logs -n 25: (1.654523764s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-844780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ stop    │ -p embed-certs-902161 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-902161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:00 UTC │
	│ start   │ -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:00 UTC │ 21 Nov 25 15:01 UTC │
	│ image   │ no-preload-844780 image list --format=json                                                                                                                                                                                                    │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p no-preload-844780 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ image   │ embed-certs-902161 image list --format=json                                                                                                                                                                                                   │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p embed-certs-902161 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-714993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	│ stop    │ -p newest-cni-714993 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-714993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ image   │ newest-cni-714993 image list --format=json                                                                                                                                                                                                    │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ pause   │ -p newest-cni-714993 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	│ delete  │ -p newest-cni-714993                                                                                                                                                                                                                          │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:03 UTC │
	│ delete  │ -p newest-cni-714993                                                                                                                                                                                                                          │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:03 UTC │
	│ start   │ -p auto-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-609503                  │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 15:03:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 15:03:08.103215  499267 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:03:08.103617  499267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:03:08.103632  499267 out.go:374] Setting ErrFile to fd 2...
	I1121 15:03:08.103639  499267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:03:08.103945  499267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:03:08.104492  499267 out.go:368] Setting JSON to false
	I1121 15:03:08.105484  499267 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9940,"bootTime":1763727448,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 15:03:08.105554  499267 start.go:143] virtualization:  
	I1121 15:03:08.109791  499267 out.go:179] * [auto-609503] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 15:03:08.113671  499267 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 15:03:08.113806  499267 notify.go:221] Checking for updates...
	I1121 15:03:08.119856  499267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 15:03:08.123016  499267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:03:08.125952  499267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 15:03:08.128887  499267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 15:03:08.131852  499267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 15:03:08.135379  499267 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:03:08.135500  499267 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 15:03:08.161852  499267 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 15:03:08.162040  499267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:03:08.234986  499267 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 15:03:08.225610655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:03:08.235095  499267 docker.go:319] overlay module found
	I1121 15:03:08.238236  499267 out.go:179] * Using the docker driver based on user configuration
	I1121 15:03:08.241782  499267 start.go:309] selected driver: docker
	I1121 15:03:08.241805  499267 start.go:930] validating driver "docker" against <nil>
	I1121 15:03:08.241820  499267 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 15:03:08.242621  499267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:03:08.305578  499267 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 15:03:08.294113965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:03:08.305763  499267 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 15:03:08.306563  499267 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:03:08.309551  499267 out.go:179] * Using Docker driver with root privileges
	I1121 15:03:08.312407  499267 cni.go:84] Creating CNI manager for ""
	I1121 15:03:08.312476  499267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:03:08.312488  499267 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 15:03:08.312576  499267 start.go:353] cluster config:
	{Name:auto-609503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-609503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:03:08.317476  499267 out.go:179] * Starting "auto-609503" primary control-plane node in "auto-609503" cluster
	I1121 15:03:08.320415  499267 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 15:03:08.323504  499267 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 15:03:08.326442  499267 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:03:08.326496  499267 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 15:03:08.326506  499267 cache.go:65] Caching tarball of preloaded images
	I1121 15:03:08.326533  499267 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 15:03:08.326594  499267 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 15:03:08.326616  499267 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 15:03:08.326727  499267 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/config.json ...
	I1121 15:03:08.326751  499267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/config.json: {Name:mk1b3bf6e347450b854355eb9f51f086d99f5f45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:08.348784  499267 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 15:03:08.348810  499267 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 15:03:08.348824  499267 cache.go:243] Successfully downloaded all kic artifacts
	I1121 15:03:08.348847  499267 start.go:360] acquireMachinesLock for auto-609503: {Name:mk26000e44d3f320ceb57a927fe2455eb841c191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 15:03:08.348951  499267 start.go:364] duration metric: took 84.457µs to acquireMachinesLock for "auto-609503"
	I1121 15:03:08.348983  499267 start.go:93] Provisioning new machine with config: &{Name:auto-609503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-609503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:03:08.349069  499267 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 21 15:02:57 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:02:57.455412428Z" level=info msg="Created container aee0b53b26f93c037e0f22ea74543d924e8ca396d28096196bafe07e9d84b3e3: kube-system/coredns-66bc5c9577-zhrs7/coredns" id=9aebbc19-ba33-4459-be48-43b79b347be8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:02:57 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:02:57.456378388Z" level=info msg="Starting container: aee0b53b26f93c037e0f22ea74543d924e8ca396d28096196bafe07e9d84b3e3" id=a245c6f0-ac39-4754-8db0-4c1639d7c7ca name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:02:57 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:02:57.458882852Z" level=info msg="Started container" PID=1741 containerID=aee0b53b26f93c037e0f22ea74543d924e8ca396d28096196bafe07e9d84b3e3 description=kube-system/coredns-66bc5c9577-zhrs7/coredns id=a245c6f0-ac39-4754-8db0-4c1639d7c7ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8b59a97d78f2914193810caeba8c1c005b299f59eebc863b224d61a5a256034
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.452464226Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3cb4a4ee-3f9f-4df2-a4ea-43d4c2e6c3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.452995515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.499995019Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4f55822b8b6ccb215895f8262db14ac626dfa70035153b8f4bd2330b8b64bcbe UID:878764d7-809e-440c-a237-6313950ee921 NetNS:/var/run/netns/986a0998-1462-4560-ab00-98485838b828 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000a20e68}] Aliases:map[]}"
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.500049944Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.590018013Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4f55822b8b6ccb215895f8262db14ac626dfa70035153b8f4bd2330b8b64bcbe UID:878764d7-809e-440c-a237-6313950ee921 NetNS:/var/run/netns/986a0998-1462-4560-ab00-98485838b828 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000a20e68}] Aliases:map[]}"
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.590383662Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.594459049Z" level=info msg="Ran pod sandbox 4f55822b8b6ccb215895f8262db14ac626dfa70035153b8f4bd2330b8b64bcbe with infra container: default/busybox/POD" id=3cb4a4ee-3f9f-4df2-a4ea-43d4c2e6c3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.600538258Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c1a97b00-9e44-45f4-8c73-1e850b465243 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.600827221Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c1a97b00-9e44-45f4-8c73-1e850b465243 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.600987059Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c1a97b00-9e44-45f4-8c73-1e850b465243 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.607414843Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd604e14-54fe-434a-9a3f-d71eabce79cc name=/runtime.v1.ImageService/PullImage
	Nov 21 15:03:00 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:00.610664505Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 15:03:02 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:02.757064381Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=dd604e14-54fe-434a-9a3f-d71eabce79cc name=/runtime.v1.ImageService/PullImage
	Nov 21 15:03:02 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:02.75823406Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b7ef2eb6-a8fb-4653-b419-fd27afaca14b name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:03:02 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:02.761041395Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3e3e3210-6f71-4333-95dd-10bb7cbdca96 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 15:03:02 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:02.770493312Z" level=info msg="Creating container: default/busybox/busybox" id=67801dcb-3de9-42e9-a1f2-37d9db636d25 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:03:02 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:02.770763469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:03:02 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:02.779245258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:03:02 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:02.779769424Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:03:02 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:02.802121667Z" level=info msg="Created container 62e46a6b0dea748f1699831fe9dcab5958ca07dd99ce514f186d255f290952af: default/busybox/busybox" id=67801dcb-3de9-42e9-a1f2-37d9db636d25 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:03:02 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:02.803113433Z" level=info msg="Starting container: 62e46a6b0dea748f1699831fe9dcab5958ca07dd99ce514f186d255f290952af" id=c659aef7-ff33-483e-994c-76d9728a4714 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:03:02 default-k8s-diff-port-124330 crio[838]: time="2025-11-21T15:03:02.807893542Z" level=info msg="Started container" PID=1798 containerID=62e46a6b0dea748f1699831fe9dcab5958ca07dd99ce514f186d255f290952af description=default/busybox/busybox id=c659aef7-ff33-483e-994c-76d9728a4714 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f55822b8b6ccb215895f8262db14ac626dfa70035153b8f4bd2330b8b64bcbe
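The excerpt above is the usual kubelet-driven image flow over CRI: ImageStatus reports gcr.io/k8s-minikube/busybox:1.28.4-glibc missing, PullImage fetches it, then CreateContainer and StartContainer run it. A small client sketch of that first ImageStatus call, assuming CRI-O's default socket path on the node (illustrative, not part of the harness):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed endpoint: CRI-O's default unix socket inside the node.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// The same call CRI-O logs above as /runtime.v1.ImageService/ImageStatus.
		resp, err := runtimeapi.NewImageServiceClient(conn).ImageStatus(ctx,
			&runtimeapi.ImageStatusRequest{
				Image: &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"},
			})
		if err != nil {
			log.Fatal(err)
		}
		if resp.Image == nil {
			// A nil Image is the "not found" above, after which the kubelet
			// follows up with /runtime.v1.ImageService/PullImage.
			fmt.Println("image not present; a pull would be triggered")
			return
		}
		fmt.Println("image present:", resp.Image.Id)
	}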
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	62e46a6b0dea7       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   4f55822b8b6cc       busybox                                                default
	aee0b53b26f93       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   e8b59a97d78f2       coredns-66bc5c9577-zhrs7                               kube-system
	f45831e278f6d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   ebee32b9ec7aa       storage-provisioner                                    kube-system
	94c511f69f105       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   b88fdf1a54a74       kindnet-wdpnm                                          kube-system
	390d6f61aeb6f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   9efb82553f443       kube-proxy-fr5df                                       kube-system
	d79b822234f77       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   a1471041e866d       kube-apiserver-default-k8s-diff-port-124330            kube-system
	ca581db154cac       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   5635a78c1fe0b       etcd-default-k8s-diff-port-124330                      kube-system
	338d121e03a6e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   5931ab81761f7       kube-scheduler-default-k8s-diff-port-124330            kube-system
	5a7081c973f16       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   f08ea29fae863       kube-controller-manager-default-k8s-diff-port-124330   kube-system
	
	
	==> coredns [aee0b53b26f93c037e0f22ea74543d924e8ca396d28096196bafe07e9d84b3e3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46124 - 31185 "HINFO IN 976375061613658909.5351639597496101089. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013218722s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-124330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-124330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=default-k8s-diff-port-124330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T15_02_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 15:02:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-124330
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:03:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 15:02:56 +0000   Fri, 21 Nov 2025 15:02:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-124330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                6a639b89-86eb-4814-8ac4-d429830f770c
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-zhrs7                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-124330                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-wdpnm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-124330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-124330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-fr5df                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-124330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  71s (x8 over 72s)  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 72s)  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 72s)  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-124330 event: Registered Node default-k8s-diff-port-124330 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-124330 status is now: NodeReady
	
	
	==> dmesg <==
	[ +27.017471] overlayfs: idmapped layers are currently not supported
	[Nov21 14:40] overlayfs: idmapped layers are currently not supported
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	[Nov21 15:00] overlayfs: idmapped layers are currently not supported
	[ +13.392744] overlayfs: idmapped layers are currently not supported
	[Nov21 15:01] overlayfs: idmapped layers are currently not supported
	[Nov21 15:02] overlayfs: idmapped layers are currently not supported
	[ +25.555443] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ca581db154cacf143092d37c838322ff82176b7c38829d83fbe1f879bd522b74] <==
	{"level":"warn","ts":"2025-11-21T15:02:04.022696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.081131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.126305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.136639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.154780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.177580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.213297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.224057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.236712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.262539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.280546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.292920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.315963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.333160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.352098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.370576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.388843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.402217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.426615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.437160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.452177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.487355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.497602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.517348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:02:04.620964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:03:11 up  2:45,  0 user,  load average: 5.56, 3.98, 3.01
	Linux default-k8s-diff-port-124330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [94c511f69f10557c4df26a9781426db48d4c87db0de9e0d8f823f9e6a6ecb52d] <==
	I1121 15:02:16.550605       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 15:02:16.550892       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 15:02:16.551175       1 main.go:148] setting mtu 1500 for CNI 
	I1121 15:02:16.551237       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 15:02:16.551277       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T15:02:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 15:02:16.766204       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 15:02:16.766230       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 15:02:16.766239       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 15:02:16.766931       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 15:02:46.766964       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 15:02:46.767161       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 15:02:46.767245       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 15:02:46.767316       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1121 15:02:47.967203       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 15:02:47.967320       1 metrics.go:72] Registering metrics
	I1121 15:02:47.967445       1 controller.go:711] "Syncing nftables rules"
	I1121 15:02:56.772440       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:02:56.772558       1 main.go:301] handling current node
	I1121 15:03:06.766401       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:03:06.766433       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d79b822234f7778a26fedbda120ccd7081e2f293525cad2c9982f0779d24ddc4] <==
	I1121 15:02:06.393998       1 policy_source.go:240] refreshing policies
	I1121 15:02:06.395445       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 15:02:06.407672       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 15:02:06.451809       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 15:02:06.465516       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:02:06.466603       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 15:02:06.509613       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:02:06.509724       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 15:02:06.661833       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 15:02:06.704610       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 15:02:06.704630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 15:02:08.417843       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 15:02:08.507116       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 15:02:08.703650       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 15:02:08.714325       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 15:02:08.715970       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 15:02:08.723675       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 15:02:09.390172       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 15:02:09.930688       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 15:02:09.955629       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 15:02:09.981292       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 15:02:14.952956       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:02:14.964654       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:02:15.451395       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 15:02:15.568156       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5a7081c973f163419a45acdc2b6b6e2f8b20bae90c3c2a8960cfb066e8b774af] <==
	I1121 15:02:14.588745       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-124330"
	I1121 15:02:14.588920       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 15:02:14.589032       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 15:02:14.590035       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 15:02:14.591495       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 15:02:14.592017       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 15:02:14.597726       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 15:02:14.635774       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 15:02:14.636057       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 15:02:14.636347       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 15:02:14.645956       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 15:02:14.646052       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 15:02:14.646130       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 15:02:14.646427       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 15:02:14.675115       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 15:02:14.675218       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:02:14.708745       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 15:02:14.736854       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:02:14.740833       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 15:02:14.753602       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 15:02:14.856342       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:02:14.886926       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:02:14.887021       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 15:02:14.887051       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 15:02:59.600180       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [390d6f61aeb6fb3e5ef8a970bccbc5856b27c1683abcf7621b484bd0a864563b] <==
	I1121 15:02:16.582171       1 server_linux.go:53] "Using iptables proxy"
	I1121 15:02:16.834276       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 15:02:16.936040       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 15:02:16.936075       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 15:02:16.936142       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 15:02:17.053643       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 15:02:17.053697       1 server_linux.go:132] "Using iptables Proxier"
	I1121 15:02:17.080105       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 15:02:17.099044       1 server.go:527] "Version info" version="v1.34.1"
	I1121 15:02:17.099072       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:02:17.100624       1 config.go:200] "Starting service config controller"
	I1121 15:02:17.100637       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 15:02:17.100654       1 config.go:106] "Starting endpoint slice config controller"
	I1121 15:02:17.100658       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 15:02:17.100694       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 15:02:17.100700       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 15:02:17.118737       1 config.go:309] "Starting node config controller"
	I1121 15:02:17.118754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 15:02:17.118762       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 15:02:17.206311       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 15:02:17.206359       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 15:02:17.206398       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [338d121e03a6eea4e42b4a0e63d3483a8f9f3bf28b0b1f015954249cc209826e] <==
	I1121 15:02:06.266580       1 serving.go:386] Generated self-signed cert in-memory
	I1121 15:02:08.990193       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 15:02:08.990329       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:02:08.997951       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 15:02:08.998158       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 15:02:08.998221       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 15:02:08.998286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 15:02:09.001730       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:02:09.008103       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:02:09.008030       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:02:09.008558       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:02:09.098603       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 15:02:09.109397       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:02:09.109474       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 21 15:02:11 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:11.431756    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-124330" podStartSLOduration=1.431744044 podStartE2EDuration="1.431744044s" podCreationTimestamp="2025-11-21 15:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:11.412042604 +0000 UTC m=+1.540095933" watchObservedRunningTime="2025-11-21 15:02:11.431744044 +0000 UTC m=+1.559797389"
	Nov 21 15:02:11 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:11.463184    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-124330" podStartSLOduration=1.463163963 podStartE2EDuration="1.463163963s" podCreationTimestamp="2025-11-21 15:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:11.432026123 +0000 UTC m=+1.560079444" watchObservedRunningTime="2025-11-21 15:02:11.463163963 +0000 UTC m=+1.591217284"
	Nov 21 15:02:14 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:14.681252    1307 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 15:02:14 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:14.683041    1307 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 15:02:15 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:15.569613    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/968146ae-c634-4d71-88d9-dd180b847494-kube-proxy\") pod \"kube-proxy-fr5df\" (UID: \"968146ae-c634-4d71-88d9-dd180b847494\") " pod="kube-system/kube-proxy-fr5df"
	Nov 21 15:02:15 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:15.569650    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/968146ae-c634-4d71-88d9-dd180b847494-xtables-lock\") pod \"kube-proxy-fr5df\" (UID: \"968146ae-c634-4d71-88d9-dd180b847494\") " pod="kube-system/kube-proxy-fr5df"
	Nov 21 15:02:15 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:15.569681    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/968146ae-c634-4d71-88d9-dd180b847494-lib-modules\") pod \"kube-proxy-fr5df\" (UID: \"968146ae-c634-4d71-88d9-dd180b847494\") " pod="kube-system/kube-proxy-fr5df"
	Nov 21 15:02:15 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:15.681839    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cd86\" (UniqueName: \"kubernetes.io/projected/968146ae-c634-4d71-88d9-dd180b847494-kube-api-access-5cd86\") pod \"kube-proxy-fr5df\" (UID: \"968146ae-c634-4d71-88d9-dd180b847494\") " pod="kube-system/kube-proxy-fr5df"
	Nov 21 15:02:15 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:15.784822    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8808169a-c3a4-4b7c-8703-356c5678bb6a-cni-cfg\") pod \"kindnet-wdpnm\" (UID: \"8808169a-c3a4-4b7c-8703-356c5678bb6a\") " pod="kube-system/kindnet-wdpnm"
	Nov 21 15:02:15 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:15.784875    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8808169a-c3a4-4b7c-8703-356c5678bb6a-lib-modules\") pod \"kindnet-wdpnm\" (UID: \"8808169a-c3a4-4b7c-8703-356c5678bb6a\") " pod="kube-system/kindnet-wdpnm"
	Nov 21 15:02:15 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:15.784897    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2gpp\" (UniqueName: \"kubernetes.io/projected/8808169a-c3a4-4b7c-8703-356c5678bb6a-kube-api-access-r2gpp\") pod \"kindnet-wdpnm\" (UID: \"8808169a-c3a4-4b7c-8703-356c5678bb6a\") " pod="kube-system/kindnet-wdpnm"
	Nov 21 15:02:15 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:15.784934    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8808169a-c3a4-4b7c-8703-356c5678bb6a-xtables-lock\") pod \"kindnet-wdpnm\" (UID: \"8808169a-c3a4-4b7c-8703-356c5678bb6a\") " pod="kube-system/kindnet-wdpnm"
	Nov 21 15:02:15 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:15.860563    1307 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 15:02:16 default-k8s-diff-port-124330 kubelet[1307]: W1121 15:02:16.230302    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/crio-b88fdf1a54a742b9941c8fed562b91adf79408e2ce22972630ac8bac487f2c61 WatchSource:0}: Error finding container b88fdf1a54a742b9941c8fed562b91adf79408e2ce22972630ac8bac487f2c61: Status 404 returned error can't find the container with id b88fdf1a54a742b9941c8fed562b91adf79408e2ce22972630ac8bac487f2c61
	Nov 21 15:02:17 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:17.535416    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fr5df" podStartSLOduration=2.535397138 podStartE2EDuration="2.535397138s" podCreationTimestamp="2025-11-21 15:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:16.687992817 +0000 UTC m=+6.816046146" watchObservedRunningTime="2025-11-21 15:02:17.535397138 +0000 UTC m=+7.663450483"
	Nov 21 15:02:20 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:20.333303    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wdpnm" podStartSLOduration=5.333282122 podStartE2EDuration="5.333282122s" podCreationTimestamp="2025-11-21 15:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:17.535862817 +0000 UTC m=+7.663916138" watchObservedRunningTime="2025-11-21 15:02:20.333282122 +0000 UTC m=+10.461335443"
	Nov 21 15:02:56 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:56.891696    1307 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 15:02:57 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:57.031548    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdkf9\" (UniqueName: \"kubernetes.io/projected/72853767-c110-4974-813d-a43eb4ea90a6-kube-api-access-jdkf9\") pod \"storage-provisioner\" (UID: \"72853767-c110-4974-813d-a43eb4ea90a6\") " pod="kube-system/storage-provisioner"
	Nov 21 15:02:57 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:57.031789    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jsxs\" (UniqueName: \"kubernetes.io/projected/6d450543-7e6c-43d8-93ac-9ceca2afe29a-kube-api-access-9jsxs\") pod \"coredns-66bc5c9577-zhrs7\" (UID: \"6d450543-7e6c-43d8-93ac-9ceca2afe29a\") " pod="kube-system/coredns-66bc5c9577-zhrs7"
	Nov 21 15:02:57 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:57.031886    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/72853767-c110-4974-813d-a43eb4ea90a6-tmp\") pod \"storage-provisioner\" (UID: \"72853767-c110-4974-813d-a43eb4ea90a6\") " pod="kube-system/storage-provisioner"
	Nov 21 15:02:57 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:57.031968    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d450543-7e6c-43d8-93ac-9ceca2afe29a-config-volume\") pod \"coredns-66bc5c9577-zhrs7\" (UID: \"6d450543-7e6c-43d8-93ac-9ceca2afe29a\") " pod="kube-system/coredns-66bc5c9577-zhrs7"
	Nov 21 15:02:57 default-k8s-diff-port-124330 kubelet[1307]: W1121 15:02:57.371375    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/crio-e8b59a97d78f2914193810caeba8c1c005b299f59eebc863b224d61a5a256034 WatchSource:0}: Error finding container e8b59a97d78f2914193810caeba8c1c005b299f59eebc863b224d61a5a256034: Status 404 returned error can't find the container with id e8b59a97d78f2914193810caeba8c1c005b299f59eebc863b224d61a5a256034
	Nov 21 15:02:57 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:57.653268    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zhrs7" podStartSLOduration=42.653249877 podStartE2EDuration="42.653249877s" podCreationTimestamp="2025-11-21 15:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:57.62504644 +0000 UTC m=+47.753099777" watchObservedRunningTime="2025-11-21 15:02:57.653249877 +0000 UTC m=+47.781303198"
	Nov 21 15:02:59 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:59.834359    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.834338901 podStartE2EDuration="42.834338901s" podCreationTimestamp="2025-11-21 15:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:02:57.693266662 +0000 UTC m=+47.821319991" watchObservedRunningTime="2025-11-21 15:02:59.834338901 +0000 UTC m=+49.962392222"
	Nov 21 15:02:59 default-k8s-diff-port-124330 kubelet[1307]: I1121 15:02:59.953167    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4d6v\" (UniqueName: \"kubernetes.io/projected/878764d7-809e-440c-a237-6313950ee921-kube-api-access-q4d6v\") pod \"busybox\" (UID: \"878764d7-809e-440c-a237-6313950ee921\") " pod="default/busybox"
	
	
	==> storage-provisioner [f45831e278f6d48591449d109491e4bde5d541974b3ea4d9201376e9ce9f7da8] <==
	I1121 15:02:57.442152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 15:02:57.486228       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 15:02:57.486308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 15:02:57.502513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:02:57.515606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:02:57.515859       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 15:02:57.516080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-124330_ab72590a-de6f-4ec9-9930-d603753d0217!
	I1121 15:02:57.517147       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf0e76b3-7a61-453e-ad8b-291e224c4abe", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-124330_ab72590a-de6f-4ec9-9930-d603753d0217 became leader
	W1121 15:02:57.569535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:02:57.578215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:02:57.618212       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-124330_ab72590a-de6f-4ec9-9930-d603753d0217!
	W1121 15:02:59.583058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:02:59.588307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:03:01.591289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:03:01.598707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:03:03.602636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:03:03.610107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:03:05.613036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:03:05.621941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:03:07.625669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:03:07.631608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:03:09.634678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:03:09.656036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
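The storage-provisioner log above is dominated by repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" client-go warnings. These line up with the provisioner's Endpoints-based leader election on kube-system/k8s.io-minikube-hostpath and read as noise rather than a failure signal here. One way to inspect the object behind that lease (a sketch; the context and resource names are taken from the log above):

	kubectl --context default-k8s-diff-port-124330 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml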
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-124330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-124330 --alsologtostderr -v=1
E1121 15:04:41.637300  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-124330 --alsologtostderr -v=1: exit status 80 (2.415845583s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-124330 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 15:04:39.799657  504941 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:04:39.800011  504941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:04:39.800019  504941 out.go:374] Setting ErrFile to fd 2...
	I1121 15:04:39.800023  504941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:04:39.800283  504941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:04:39.800616  504941 out.go:368] Setting JSON to false
	I1121 15:04:39.800638  504941 mustload.go:66] Loading cluster: default-k8s-diff-port-124330
	I1121 15:04:39.801029  504941 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:04:39.801486  504941 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:04:39.828926  504941 host.go:66] Checking if "default-k8s-diff-port-124330" exists ...
	I1121 15:04:39.829246  504941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:04:39.931519  504941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 15:04:39.920332641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:04:39.932379  504941 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-124330 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 15:04:39.939138  504941 out.go:179] * Pausing node default-k8s-diff-port-124330 ... 
	I1121 15:04:39.946292  504941 host.go:66] Checking if "default-k8s-diff-port-124330" exists ...
	I1121 15:04:39.946733  504941 ssh_runner.go:195] Run: systemctl --version
	I1121 15:04:39.946790  504941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:04:39.973802  504941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:04:40.092638  504941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:04:40.109073  504941 pause.go:52] kubelet running: true
	I1121 15:04:40.109147  504941 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:04:40.464787  504941 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:04:40.464882  504941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:04:40.601331  504941 cri.go:89] found id: "7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db"
	I1121 15:04:40.601355  504941 cri.go:89] found id: "5b113bd590fba8ccf5296efcebfb784b1cf6e565590f72d8fad43cb29967ffdc"
	I1121 15:04:40.601360  504941 cri.go:89] found id: "c883b96946c233f38d0c8644abe895f3d621a5ad142233e77120f6b5eda51757"
	I1121 15:04:40.601364  504941 cri.go:89] found id: "5f0e5e46cc63025dfbd9f042185466d845b7c03f4a63e54afd5bb50b59c9f815"
	I1121 15:04:40.601368  504941 cri.go:89] found id: "8542c4d9705bcfbb9ccfc9cee884439ff94461901253f047e84e29acf9b7621e"
	I1121 15:04:40.601371  504941 cri.go:89] found id: "ee9ac53aba59fc1e496aea56983c3d0c392cff161ea0a9c80336aaf6a3bb18d1"
	I1121 15:04:40.601374  504941 cri.go:89] found id: "f3450da3d6505714a2ddbd0849055e0c303889ab8fbf96ab66e5fb100167b3d0"
	I1121 15:04:40.601378  504941 cri.go:89] found id: "c627117ae55976c3bd9490f6441736eebc7000d2c50a16ac0fbd1824c9604beb"
	I1121 15:04:40.601381  504941 cri.go:89] found id: "8812c413c9de68d93c0162764f45b3d55f29007bce2646ce2fb79c02a7766a43"
	I1121 15:04:40.601388  504941 cri.go:89] found id: "0df3c2682375bee0e258205c092887bac1830cecbcd0d95b1924532bfaa5484f"
	I1121 15:04:40.601391  504941 cri.go:89] found id: "0328bfae2ad66c3fc1fcf7d24675c343d9b2c56f62faf1eb3ba8350ce1788d93"
	I1121 15:04:40.601394  504941 cri.go:89] found id: ""
	I1121 15:04:40.601463  504941 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:04:40.622771  504941 retry.go:31] will retry after 303.63946ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:04:40Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:04:40.927265  504941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:04:40.941594  504941 pause.go:52] kubelet running: false
	I1121 15:04:40.941662  504941 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:04:41.179445  504941 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:04:41.179585  504941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:04:41.291247  504941 cri.go:89] found id: "7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db"
	I1121 15:04:41.291267  504941 cri.go:89] found id: "5b113bd590fba8ccf5296efcebfb784b1cf6e565590f72d8fad43cb29967ffdc"
	I1121 15:04:41.291271  504941 cri.go:89] found id: "c883b96946c233f38d0c8644abe895f3d621a5ad142233e77120f6b5eda51757"
	I1121 15:04:41.291275  504941 cri.go:89] found id: "5f0e5e46cc63025dfbd9f042185466d845b7c03f4a63e54afd5bb50b59c9f815"
	I1121 15:04:41.291278  504941 cri.go:89] found id: "8542c4d9705bcfbb9ccfc9cee884439ff94461901253f047e84e29acf9b7621e"
	I1121 15:04:41.291282  504941 cri.go:89] found id: "ee9ac53aba59fc1e496aea56983c3d0c392cff161ea0a9c80336aaf6a3bb18d1"
	I1121 15:04:41.291285  504941 cri.go:89] found id: "f3450da3d6505714a2ddbd0849055e0c303889ab8fbf96ab66e5fb100167b3d0"
	I1121 15:04:41.291288  504941 cri.go:89] found id: "c627117ae55976c3bd9490f6441736eebc7000d2c50a16ac0fbd1824c9604beb"
	I1121 15:04:41.291291  504941 cri.go:89] found id: "8812c413c9de68d93c0162764f45b3d55f29007bce2646ce2fb79c02a7766a43"
	I1121 15:04:41.291297  504941 cri.go:89] found id: "0df3c2682375bee0e258205c092887bac1830cecbcd0d95b1924532bfaa5484f"
	I1121 15:04:41.291301  504941 cri.go:89] found id: "0328bfae2ad66c3fc1fcf7d24675c343d9b2c56f62faf1eb3ba8350ce1788d93"
	I1121 15:04:41.291303  504941 cri.go:89] found id: ""
	I1121 15:04:41.291359  504941 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:04:41.307455  504941 retry.go:31] will retry after 517.089293ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:04:41Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:04:41.824805  504941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:04:41.838025  504941 pause.go:52] kubelet running: false
	I1121 15:04:41.838090  504941 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 15:04:42.015754  504941 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 15:04:42.015839  504941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 15:04:42.099661  504941 cri.go:89] found id: "7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db"
	I1121 15:04:42.099747  504941 cri.go:89] found id: "5b113bd590fba8ccf5296efcebfb784b1cf6e565590f72d8fad43cb29967ffdc"
	I1121 15:04:42.099770  504941 cri.go:89] found id: "c883b96946c233f38d0c8644abe895f3d621a5ad142233e77120f6b5eda51757"
	I1121 15:04:42.099802  504941 cri.go:89] found id: "5f0e5e46cc63025dfbd9f042185466d845b7c03f4a63e54afd5bb50b59c9f815"
	I1121 15:04:42.099821  504941 cri.go:89] found id: "8542c4d9705bcfbb9ccfc9cee884439ff94461901253f047e84e29acf9b7621e"
	I1121 15:04:42.099826  504941 cri.go:89] found id: "ee9ac53aba59fc1e496aea56983c3d0c392cff161ea0a9c80336aaf6a3bb18d1"
	I1121 15:04:42.099832  504941 cri.go:89] found id: "f3450da3d6505714a2ddbd0849055e0c303889ab8fbf96ab66e5fb100167b3d0"
	I1121 15:04:42.099835  504941 cri.go:89] found id: "c627117ae55976c3bd9490f6441736eebc7000d2c50a16ac0fbd1824c9604beb"
	I1121 15:04:42.099838  504941 cri.go:89] found id: "8812c413c9de68d93c0162764f45b3d55f29007bce2646ce2fb79c02a7766a43"
	I1121 15:04:42.099846  504941 cri.go:89] found id: "0df3c2682375bee0e258205c092887bac1830cecbcd0d95b1924532bfaa5484f"
	I1121 15:04:42.099850  504941 cri.go:89] found id: "0328bfae2ad66c3fc1fcf7d24675c343d9b2c56f62faf1eb3ba8350ce1788d93"
	I1121 15:04:42.099854  504941 cri.go:89] found id: ""
	I1121 15:04:42.099933  504941 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 15:04:42.118047  504941 out.go:203] 
	W1121 15:04:42.121091  504941 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:04:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:04:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 15:04:42.121117  504941 out.go:285] * 
	* 
	W1121 15:04:42.128449  504941 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 15:04:42.131964  504941 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-124330 --alsologtostderr -v=1 failed: exit status 80
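The stderr above pins down the failure: kubelet stops cleanly and crictl keeps listing the expected kube-system containers, but every `sudo runc list -f json` attempt exits 1 with `open /run/runc: no such file or directory`, so the pause loop retries until minikube aborts with GUEST_PAUSE. A plausible cause (an assumption, not confirmed by this log) is that this CRI-O build launches containers through crun, whose state lives under /run/crun, so runc's default root /run/runc is never created. A manual check on the node along these lines would confirm or rule that out:

	# hypothetical diagnosis; the profile name comes from this report, the runtime paths are assumptions
	minikube -p default-k8s-diff-port-124330 ssh -- sudo crictl ps -q
	minikube -p default-k8s-diff-port-124330 ssh -- ls -d /run/runc /run/crun
	minikube -p default-k8s-diff-port-124330 ssh -- sudo crun list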
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-124330
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-124330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818",
	        "Created": "2025-11-21T15:01:40.035459408Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501969,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T15:03:25.142970088Z",
	            "FinishedAt": "2025-11-21T15:03:24.0338101Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/hostname",
	        "HostsPath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/hosts",
	        "LogPath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818-json.log",
	        "Name": "/default-k8s-diff-port-124330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-124330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-124330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818",
	                "LowerDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-124330",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-124330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-124330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-124330",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-124330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10238c7f745c38df13197f304733401acc849a4a63d1bdb26f6964f39fbda4b9",
	            "SandboxKey": "/var/run/docker/netns/10238c7f745c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-124330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:47:2a:f2:6f:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "571375adbe67c8114c1253f4d87fb2a0f5ebbd2759db87cf3bcc3311dbadaf5e",
	                    "EndpointID": "75ae195c29d83c551553163a039646574b4b1c3caa2f1806dc5c5d0776dfd859",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-124330",
	                        "fad72cd6bedb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
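Every exposed port of the container in the inspect dump above (22, 2376, 5000, 8444, 32443/tcp) is published on an ephemeral 127.0.0.1 port; the provisioning steps later in this log dial the 22/tcp mapping (33468). A minimal Go sketch of recovering that mapping from the inspect JSON — the struct fields follow the Docker Engine API, but the program itself is illustrative, not part of the test suite:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// inspectEntry models only the slice of `docker container inspect`
// output used here: the NetworkSettings.Ports map seen above.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"default-k8s-diff-port-124330").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		fmt.Fprintln(os.Stderr, "unexpected inspect output")
		os.Exit(1)
	}
	// 22/tcp is the container's sshd; above it maps to 127.0.0.1:33468.
	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort)
	}
}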
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330: exit status 2 (366.809581ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
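The --format flag here is a Go text/template evaluated against minikube's status struct; a paused cluster still reports its host as Running, which is why the harness tolerates exit status 2. A toy sketch of that evaluation (the Status type below is illustrative, not minikube's actual struct):

package main

import (
	"os"
	"text/template"
)

// Status stands in for minikube's status struct; only fields commonly
// used in --format expressions are sketched.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Prints "Running" even though the API server is paused, matching
	// the stdout captured above.
	if err := tmpl.Execute(os.Stdout, Status{
		Host:       "Running",
		Kubelet:    "Stopped",
		APIServer:  "Paused",
		Kubeconfig: "Configured",
	}); err != nil {
		panic(err)
	}
}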
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-124330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-124330 logs -n 25: (1.334268506s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-844780 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ image   │ embed-certs-902161 image list --format=json                                                                                                                                                                                                   │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p embed-certs-902161 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-714993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	│ stop    │ -p newest-cni-714993 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-714993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ image   │ newest-cni-714993 image list --format=json                                                                                                                                                                                                    │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ pause   │ -p newest-cni-714993 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	│ delete  │ -p newest-cni-714993                                                                                                                                                                                                                          │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:03 UTC │
	│ delete  │ -p newest-cni-714993                                                                                                                                                                                                                          │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:03 UTC │
	│ start   │ -p auto-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-609503                  │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:04 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-124330 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:03 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:03 UTC │
	│ start   │ -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:04 UTC │
	│ ssh     │ -p auto-609503 pgrep -a kubelet                                                                                                                                                                                                               │ auto-609503                  │ jenkins │ v1.37.0 │ 21 Nov 25 15:04 UTC │ 21 Nov 25 15:04 UTC │
	│ image   │ default-k8s-diff-port-124330 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:04 UTC │ 21 Nov 25 15:04 UTC │
	│ pause   │ -p default-k8s-diff-port-124330 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 15:03:24
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 15:03:24.724792  501835 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:03:24.725093  501835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:03:24.725134  501835 out.go:374] Setting ErrFile to fd 2...
	I1121 15:03:24.725154  501835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:03:24.725565  501835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:03:24.726118  501835 out.go:368] Setting JSON to false
	I1121 15:03:24.727469  501835 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9957,"bootTime":1763727448,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 15:03:24.727589  501835 start.go:143] virtualization:  
	I1121 15:03:24.733173  501835 out.go:179] * [default-k8s-diff-port-124330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 15:03:24.736546  501835 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 15:03:24.736636  501835 notify.go:221] Checking for updates...
	I1121 15:03:24.741745  501835 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 15:03:24.744724  501835 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:03:24.747571  501835 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 15:03:24.750402  501835 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 15:03:24.753352  501835 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 15:03:24.756634  501835 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:03:24.757217  501835 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 15:03:24.794959  501835 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 15:03:24.795142  501835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:03:24.907091  501835 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 15:03:24.879378898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:03:24.907192  501835 docker.go:319] overlay module found
	I1121 15:03:24.910231  501835 out.go:179] * Using the docker driver based on existing profile
	I1121 15:03:24.913141  501835 start.go:309] selected driver: docker
	I1121 15:03:24.913158  501835 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:03:24.913252  501835 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 15:03:24.913962  501835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:03:25.012824  501835 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 15:03:24.995408534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:03:25.013201  501835 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:03:25.013228  501835 cni.go:84] Creating CNI manager for ""
	I1121 15:03:25.013282  501835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:03:25.013320  501835 start.go:353] cluster config:
	{Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:03:25.016535  501835 out.go:179] * Starting "default-k8s-diff-port-124330" primary control-plane node in "default-k8s-diff-port-124330" cluster
	I1121 15:03:25.019314  501835 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 15:03:25.022311  501835 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 15:03:25.025114  501835 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:03:25.025176  501835 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 15:03:25.025199  501835 cache.go:65] Caching tarball of preloaded images
	I1121 15:03:25.025290  501835 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 15:03:25.025301  501835 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 15:03:25.025415  501835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/config.json ...
	I1121 15:03:25.025659  501835 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 15:03:25.059732  501835 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 15:03:25.059752  501835 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 15:03:25.059764  501835 cache.go:243] Successfully downloaded all kic artifacts
	I1121 15:03:25.059786  501835 start.go:360] acquireMachinesLock for default-k8s-diff-port-124330: {Name:mk8c422fee3dc1ab576ba87a9b21326872d469a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 15:03:25.059842  501835 start.go:364] duration metric: took 34.446µs to acquireMachinesLock for "default-k8s-diff-port-124330"
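The start.go:360/364 pair above shows the machines-lock spec (Delay:500ms, Timeout:10m0s) and an uncontended acquisition in ~34µs. A toy version of that poll-until-timeout pattern — the real implementation is an OS-level named mutex, not this channel:

package main

import (
	"errors"
	"fmt"
	"time"
)

// sem stands in for an OS-level named mutex; tryLock is a non-blocking probe.
var sem = make(chan struct{}, 1)

func tryLock() bool {
	select {
	case sem <- struct{}{}:
		return true
	default:
		return false
	}
}

// acquire retries tryLock every delay until timeout, mirroring the
// Delay:500ms Timeout:10m0s spec in the log line above.
func acquire(delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !tryLock() {
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	start := time.Now()
	if err := acquire(500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("acquired in %s\n", time.Since(start)) // microseconds when uncontended
}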
	I1121 15:03:25.059861  501835 start.go:96] Skipping create...Using existing machine configuration
	I1121 15:03:25.059866  501835 fix.go:54] fixHost starting: 
	I1121 15:03:25.060125  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:25.110068  501835 fix.go:112] recreateIfNeeded on default-k8s-diff-port-124330: state=Stopped err=<nil>
	W1121 15:03:25.110100  501835 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 15:03:23.845153  499267 out.go:252]   - Generating certificates and keys ...
	I1121 15:03:23.845326  499267 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 15:03:23.845430  499267 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 15:03:24.654905  499267 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 15:03:25.098384  499267 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 15:03:25.787401  499267 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 15:03:26.395282  499267 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 15:03:27.033592  499267 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 15:03:27.033953  499267 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-609503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 15:03:27.389067  499267 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 15:03:27.389429  499267 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-609503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 15:03:25.113470  501835 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-124330" ...
	I1121 15:03:25.113571  501835 cli_runner.go:164] Run: docker start default-k8s-diff-port-124330
	I1121 15:03:25.416543  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:25.448525  501835 kic.go:430] container "default-k8s-diff-port-124330" state is running.
	I1121 15:03:25.448914  501835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124330
	I1121 15:03:25.471453  501835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/config.json ...
	I1121 15:03:25.471678  501835 machine.go:94] provisionDockerMachine start ...
	I1121 15:03:25.471735  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:25.492868  501835 main.go:143] libmachine: Using SSH client type: native
	I1121 15:03:25.493189  501835 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1121 15:03:25.493198  501835 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 15:03:25.495564  501835 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 15:03:28.653889  501835 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124330
	
	I1121 15:03:28.653918  501835 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-124330"
	I1121 15:03:28.653988  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:28.683972  501835 main.go:143] libmachine: Using SSH client type: native
	I1121 15:03:28.684324  501835 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1121 15:03:28.684338  501835 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-124330 && echo "default-k8s-diff-port-124330" | sudo tee /etc/hostname
	I1121 15:03:28.855215  501835 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124330
	
	I1121 15:03:28.855288  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:28.878927  501835 main.go:143] libmachine: Using SSH client type: native
	I1121 15:03:28.879232  501835 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1121 15:03:28.879255  501835 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-124330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-124330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-124330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 15:03:29.037790  501835 main.go:143] libmachine: SSH cmd err, output: <nil>: 
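These provisioning commands all run over the SSH port published in the inspect output; the first dial at 15:03:25.495564 fails with "ssh: handshake failed: EOF" simply because sshd inside the freshly restarted container is not up yet, and the client retries until it is. A hedged sketch of such a retry loop with golang.org/x/crypto/ssh (user, port, and key path are illustrative):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/default-k8s-diff-port-124330/id_rsa")) // illustrative path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig, not production
		Timeout:         5 * time.Second,
	}
	// Retry while the container's sshd comes up; the first attempt
	// typically fails with "ssh: handshake failed: EOF".
	var client *ssh.Client
	for attempt := 0; attempt < 30; attempt++ {
		if client, err = ssh.Dial("tcp", "127.0.0.1:33468", cfg); err == nil {
			break
		}
		time.Sleep(time.Second)
	}
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh is up")
}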
	I1121 15:03:29.037817  501835 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 15:03:29.037858  501835 ubuntu.go:190] setting up certificates
	I1121 15:03:29.037869  501835 provision.go:84] configureAuth start
	I1121 15:03:29.037934  501835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124330
	I1121 15:03:29.058498  501835 provision.go:143] copyHostCerts
	I1121 15:03:29.058566  501835 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 15:03:29.058588  501835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 15:03:29.058665  501835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 15:03:29.058772  501835 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 15:03:29.058783  501835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 15:03:29.058816  501835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 15:03:29.058880  501835 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 15:03:29.058890  501835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 15:03:29.058915  501835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 15:03:29.058980  501835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-124330 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-124330 localhost minikube]
	I1121 15:03:30.067263  501835 provision.go:177] copyRemoteCerts
	I1121 15:03:30.067356  501835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 15:03:30.067425  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:30.088803  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:30.195609  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 15:03:30.234692  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1121 15:03:30.267441  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 15:03:30.288545  501835 provision.go:87] duration metric: took 1.250648382s to configureAuth
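The server certificate generated at provision.go:117 carries the SAN set [127.0.0.1 192.168.85.2 default-k8s-diff-port-124330 localhost minikube], so one cert validates for the loopback tunnel, the container IP, and the hostname alike. A self-contained crypto/x509 sketch of issuing such a cert (key size, subjects, and output handling are illustrative, not minikube's code; the 26280h lifetime matches the CertExpiration in the cluster config above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Self-signed CA, standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server cert with the SAN list from the provision.go:117 line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-124330"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-124330", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}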
	I1121 15:03:30.288617  501835 ubuntu.go:206] setting minikube options for container-runtime
	I1121 15:03:30.288838  501835 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:03:30.288991  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:30.309172  501835 main.go:143] libmachine: Using SSH client type: native
	I1121 15:03:30.309483  501835 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1121 15:03:30.309497  501835 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 15:03:30.708355  501835 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 15:03:30.708459  501835 machine.go:97] duration metric: took 5.236770477s to provisionDockerMachine
	I1121 15:03:30.708485  501835 start.go:293] postStartSetup for "default-k8s-diff-port-124330" (driver="docker")
	I1121 15:03:30.708523  501835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 15:03:30.708603  501835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 15:03:30.708695  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:30.738192  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:30.861597  501835 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 15:03:30.867302  501835 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 15:03:30.867387  501835 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 15:03:30.867414  501835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 15:03:30.867521  501835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 15:03:30.867697  501835 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 15:03:30.867909  501835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 15:03:30.879998  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:03:30.908855  501835 start.go:296] duration metric: took 200.317069ms for postStartSetup
	I1121 15:03:30.909053  501835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 15:03:30.909135  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:30.935479  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:31.050573  501835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 15:03:31.056664  501835 fix.go:56] duration metric: took 5.996788768s for fixHost
	I1121 15:03:31.056741  501835 start.go:83] releasing machines lock for "default-k8s-diff-port-124330", held for 5.996890299s
	I1121 15:03:31.056849  501835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124330
	I1121 15:03:31.080267  501835 ssh_runner.go:195] Run: cat /version.json
	I1121 15:03:31.080333  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:31.080659  501835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 15:03:31.080723  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:31.120418  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:31.128658  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:31.326558  501835 ssh_runner.go:195] Run: systemctl --version
	I1121 15:03:31.333756  501835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 15:03:31.396835  501835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 15:03:31.401712  501835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 15:03:31.401783  501835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 15:03:31.410281  501835 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 15:03:31.410305  501835 start.go:496] detecting cgroup driver to use...
	I1121 15:03:31.410337  501835 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 15:03:31.410388  501835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 15:03:31.426746  501835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 15:03:31.441549  501835 docker.go:218] disabling cri-docker service (if available) ...
	I1121 15:03:31.441610  501835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 15:03:31.458256  501835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 15:03:31.472954  501835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 15:03:31.637607  501835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 15:03:31.780332  501835 docker.go:234] disabling docker service ...
	I1121 15:03:31.780476  501835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 15:03:31.799478  501835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 15:03:31.813879  501835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 15:03:31.989719  501835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 15:03:32.177126  501835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 15:03:32.195635  501835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 15:03:32.213150  501835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 15:03:32.213218  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.229048  501835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 15:03:32.229210  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.244353  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.257406  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.265953  501835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 15:03:32.273842  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.282818  501835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.293905  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.311242  501835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 15:03:32.322124  501835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 15:03:32.333134  501835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:03:32.490600  501835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 15:03:32.705088  501835 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 15:03:32.705162  501835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 15:03:32.720952  501835 start.go:564] Will wait 60s for crictl version
	I1121 15:03:32.721033  501835 ssh_runner.go:195] Run: which crictl
	I1121 15:03:32.725021  501835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 15:03:32.751639  501835 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 15:03:32.751721  501835 ssh_runner.go:195] Run: crio --version
	I1121 15:03:32.794823  501835 ssh_runner.go:195] Run: crio --version
	I1121 15:03:32.829127  501835 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 15:03:28.311429  499267 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 15:03:28.914535  499267 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 15:03:29.504795  499267 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 15:03:29.505358  499267 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 15:03:29.915689  499267 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 15:03:32.048832  499267 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 15:03:32.310399  499267 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 15:03:33.256910  499267 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 15:03:33.904183  499267 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 15:03:33.904284  499267 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 15:03:33.908709  499267 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 15:03:32.831887  501835 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-124330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 15:03:32.870294  501835 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 15:03:32.876532  501835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:03:32.893333  501835 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 15:03:32.893453  501835 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:03:32.893512  501835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:03:32.938007  501835 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:03:32.938028  501835 crio.go:433] Images already preloaded, skipping extraction
	I1121 15:03:32.938083  501835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:03:32.966810  501835 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:03:32.966885  501835 cache_images.go:86] Images are preloaded, skipping loading
	I1121 15:03:32.966907  501835 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1121 15:03:32.967056  501835 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-124330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
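minikube renders this kubelet systemd drop-in from a Go text/template before writing it to the node. A trimmed sketch of that rendering — the template text and field names here are illustrative (the flag list is shortened), not minikube's actual bootstrapper template:

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	if err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "default-k8s-diff-port-124330",
		"NodeIP":            "192.168.85.2",
	}); err != nil {
		panic(err)
	}
}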
	I1121 15:03:32.967185  501835 ssh_runner.go:195] Run: crio config
	I1121 15:03:33.100249  501835 cni.go:84] Creating CNI manager for ""
	I1121 15:03:33.100324  501835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:03:33.100353  501835 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 15:03:33.100415  501835 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-124330 NodeName:default-k8s-diff-port-124330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 15:03:33.100607  501835 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-124330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
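
The multi-document kubeadm config above is rendered from the kubeadm.go:190 option struct. A minimal sketch, using Go's text/template, of how such an InitConfiguration stanza can be produced from structured options; this is illustrative only, and minikube's real templates and option types differ:

package main

import (
	"os"
	"text/template"
)

// initCfg holds only the fields this stanza needs; the real option struct
// logged above carries many more.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
}

const stanza = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(stanza))
	// Values copied from the generated config above.
	_ = t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.168.85.2",
		BindPort:         8444,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "default-k8s-diff-port-124330",
	})
}
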
	
	I1121 15:03:33.100728  501835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 15:03:33.110391  501835 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 15:03:33.110516  501835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 15:03:33.118664  501835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1121 15:03:33.137398  501835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 15:03:33.151740  501835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1121 15:03:33.168144  501835 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 15:03:33.172100  501835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
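
The bash one-liner above is an idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, append the current mapping, and copy the result back through a temp file. The same logic in Go, as a minimal sketch that rewrites the file in place and assumes sufficient privileges:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	// Drop any existing entry (the bash version's grep -v), keep the rest.
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
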
	I1121 15:03:33.182609  501835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:03:33.380420  501835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:03:33.407711  501835 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330 for IP: 192.168.85.2
	I1121 15:03:33.407738  501835 certs.go:195] generating shared ca certs ...
	I1121 15:03:33.407754  501835 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:33.407888  501835 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 15:03:33.407929  501835 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 15:03:33.407951  501835 certs.go:257] generating profile certs ...
	I1121 15:03:33.408036  501835 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.key
	I1121 15:03:33.408105  501835 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/apiserver.key.00e0670e
	I1121 15:03:33.408148  501835 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/proxy-client.key
	I1121 15:03:33.408272  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 15:03:33.408310  501835 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 15:03:33.408324  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 15:03:33.408349  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 15:03:33.408376  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 15:03:33.408434  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 15:03:33.408480  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:03:33.409046  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 15:03:33.428322  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 15:03:33.447292  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 15:03:33.468722  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 15:03:33.506711  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1121 15:03:33.540284  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 15:03:33.587981  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 15:03:33.640245  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 15:03:33.709663  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 15:03:33.753057  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 15:03:33.778780  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 15:03:33.799551  501835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 15:03:33.816770  501835 ssh_runner.go:195] Run: openssl version
	I1121 15:03:33.824365  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 15:03:33.841811  501835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 15:03:33.847061  501835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 15:03:33.847181  501835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 15:03:33.889416  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 15:03:33.898251  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 15:03:33.908251  501835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 15:03:33.913399  501835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 15:03:33.913780  501835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 15:03:33.961573  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 15:03:33.970487  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 15:03:33.981835  501835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:03:33.992051  501835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:03:33.992173  501835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:03:34.048606  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
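
The three test/ln pairs above install each CA into OpenSSL's trust directory: openssl x509 -hash -noout prints the subject-name hash that OpenSSL expects as the symlink name (<hash>.0), which is why minikubeCA.pem ends up linked as b5213941.0. A minimal sketch of the same step, assuming the openssl binary is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// os.Symlink fails if the link already exists; the shell version guards
	// against that with "test -L ... ||" for the same reason.
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", cert)
}
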
	I1121 15:03:34.058059  501835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 15:03:34.062396  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 15:03:34.115239  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 15:03:34.192267  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 15:03:34.297877  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 15:03:34.390990  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 15:03:34.613618  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
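
Each -checkend 86400 run above asks openssl whether the certificate will still be valid 24 hours from now; a non-zero exit would force regeneration instead of reuse. The equivalent check in Go, without shelling out:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirrors "openssl x509 -checkend 86400": fail if the cert expires
	// within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate")
		os.Exit(1)
	}
	fmt.Println("certificate valid past", cert.NotAfter)
}
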
	I1121 15:03:34.695657  501835 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:03:34.695755  501835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 15:03:34.695820  501835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 15:03:34.795540  501835 cri.go:89] found id: "ee9ac53aba59fc1e496aea56983c3d0c392cff161ea0a9c80336aaf6a3bb18d1"
	I1121 15:03:34.795562  501835 cri.go:89] found id: "f3450da3d6505714a2ddbd0849055e0c303889ab8fbf96ab66e5fb100167b3d0"
	I1121 15:03:34.795568  501835 cri.go:89] found id: "c627117ae55976c3bd9490f6441736eebc7000d2c50a16ac0fbd1824c9604beb"
	I1121 15:03:34.795571  501835 cri.go:89] found id: "8812c413c9de68d93c0162764f45b3d55f29007bce2646ce2fb79c02a7766a43"
	I1121 15:03:34.795577  501835 cri.go:89] found id: ""
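
The cri.go lines above come from parsing crictl's --quiet output, which is simply one container ID per line; the trailing empty found id: "" corresponds to the final newline. A minimal sketch of that listing, assuming crictl is installed on the node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the Run line above: all kube-system containers,
	// IDs only.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}
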
	I1121 15:03:34.795626  501835 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 15:03:34.827356  501835 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:03:34Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:03:34.827438  501835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 15:03:34.847691  501835 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 15:03:34.847711  501835 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 15:03:34.847766  501835 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 15:03:34.865363  501835 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 15:03:34.865768  501835 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-124330" does not appear in /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:03:34.865874  501835 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-289204/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-124330" cluster setting kubeconfig missing "default-k8s-diff-port-124330" context setting]
	I1121 15:03:34.866146  501835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:34.867409  501835 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 15:03:34.886380  501835 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 15:03:34.886414  501835 kubeadm.go:602] duration metric: took 38.695677ms to restartPrimaryControlPlane
	I1121 15:03:34.886423  501835 kubeadm.go:403] duration metric: took 190.775682ms to StartCluster
	I1121 15:03:34.886438  501835 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:34.886497  501835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:03:34.887088  501835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:34.887288  501835 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:03:34.887552  501835 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:03:34.887600  501835 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 15:03:34.887667  501835 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-124330"
	I1121 15:03:34.887684  501835 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-124330"
	W1121 15:03:34.887695  501835 addons.go:248] addon storage-provisioner should already be in state true
	I1121 15:03:34.887715  501835 host.go:66] Checking if "default-k8s-diff-port-124330" exists ...
	I1121 15:03:34.887744  501835 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-124330"
	I1121 15:03:34.887768  501835 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-124330"
	W1121 15:03:34.887774  501835 addons.go:248] addon dashboard should already be in state true
	I1121 15:03:34.887797  501835 host.go:66] Checking if "default-k8s-diff-port-124330" exists ...
	I1121 15:03:34.888144  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:34.888236  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:34.890740  501835 out.go:179] * Verifying Kubernetes components...
	I1121 15:03:34.890954  501835 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-124330"
	I1121 15:03:34.890978  501835 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-124330"
	I1121 15:03:34.891308  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:34.894952  501835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:03:34.938036  501835 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 15:03:34.941032  501835 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:03:34.941054  501835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 15:03:34.941117  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
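
The --format argument above is a Go template evaluated against docker's inspect JSON; the nested index calls dig the host port bound to 22/tcp out of NetworkSettings.Ports, which is how minikube discovers the container's SSH port (33468 here). A self-contained sketch of the same template run against a pared-down stand-in structure:

package main

import (
	"os"
	"text/template"
)

func main() {
	// A pared-down stand-in for docker's inspect output; only the fields
	// the template touches are modeled.
	container := map[string]any{
		"NetworkSettings": map[string]any{
			"Ports": map[string][]map[string]string{
				"22/tcp": {{"HostIp": "127.0.0.1", "HostPort": "33468"}},
			},
		},
	}
	t := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	_ = t.Execute(os.Stdout, container) // prints 33468, matching the log
}
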
	I1121 15:03:34.946096  501835 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-124330"
	W1121 15:03:34.946113  501835 addons.go:248] addon default-storageclass should already be in state true
	I1121 15:03:34.946139  501835 host.go:66] Checking if "default-k8s-diff-port-124330" exists ...
	I1121 15:03:34.946563  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:34.998339  501835 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 15:03:35.001255  501835 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1121 15:03:33.913427  499267 out.go:252]   - Booting up control plane ...
	I1121 15:03:33.913532  499267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 15:03:33.913612  499267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 15:03:33.914836  499267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 15:03:33.933345  499267 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 15:03:33.933461  499267 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 15:03:33.944830  499267 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 15:03:33.944934  499267 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 15:03:33.944976  499267 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 15:03:34.103938  499267 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 15:03:34.104068  499267 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 15:03:36.105401  499267 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001786483s
	I1121 15:03:36.109196  499267 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 15:03:36.109300  499267 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1121 15:03:36.109866  499267 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 15:03:36.109965  499267 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 15:03:35.001251  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:35.004475  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 15:03:35.004503  501835 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 15:03:35.004579  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:35.018079  501835 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 15:03:35.018103  501835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 15:03:35.018173  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:35.043393  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:35.060483  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:35.417414  501835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:03:35.434158  501835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:03:35.465762  501835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 15:03:35.499544  501835 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-124330" to be "Ready" ...
	I1121 15:03:35.509985  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 15:03:35.510006  501835 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 15:03:35.621968  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 15:03:35.621996  501835 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 15:03:35.757447  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 15:03:35.757467  501835 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 15:03:35.882601  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 15:03:35.882623  501835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 15:03:35.961540  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 15:03:35.961605  501835 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 15:03:35.987901  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 15:03:35.987968  501835 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 15:03:36.005803  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 15:03:36.005880  501835 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 15:03:36.033885  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 15:03:36.033951  501835 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 15:03:36.060067  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 15:03:36.060133  501835 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 15:03:36.088109  501835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
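
After staging each dashboard manifest under /etc/kubernetes/addons, a single kubectl apply with repeated -f flags installs them all in one call, as the Run line above shows. A minimal sketch of assembling that invocation (manifest list abbreviated; sudo accepts the leading KUBECONFIG=... environment assignment):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml", // ... and the rest
	}
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Println(string(out), err)
}
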
	I1121 15:03:41.040941  499267 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.931244834s
	I1121 15:03:41.758765  501835 node_ready.go:49] node "default-k8s-diff-port-124330" is "Ready"
	I1121 15:03:41.758804  501835 node_ready.go:38] duration metric: took 6.259160577s for node "default-k8s-diff-port-124330" to be "Ready" ...
	I1121 15:03:41.758819  501835 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:03:41.758885  501835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:03:45.482718  501835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.048475211s)
	I1121 15:03:45.482784  501835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.016958752s)
	I1121 15:03:45.840467  501835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.7522517s)
	I1121 15:03:45.840504  501835 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.081592s)
	I1121 15:03:45.840528  501835 api_server.go:72] duration metric: took 10.953212337s to wait for apiserver process to appear ...
	I1121 15:03:45.840534  501835 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:03:45.840628  501835 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1121 15:03:45.843347  501835 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-124330 addons enable metrics-server
	
	I1121 15:03:45.846288  501835 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1121 15:03:45.891741  499267 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.782536095s
	I1121 15:03:47.610937  499267 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.501408365s
	I1121 15:03:47.639387  499267 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 15:03:47.666140  499267 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 15:03:47.682764  499267 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 15:03:47.683231  499267 kubeadm.go:319] [mark-control-plane] Marking the node auto-609503 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 15:03:47.696780  499267 kubeadm.go:319] [bootstrap-token] Using token: 3hic3h.1sikr0fxhzk10e38
	I1121 15:03:47.699667  499267 out.go:252]   - Configuring RBAC rules ...
	I1121 15:03:47.699791  499267 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 15:03:47.707820  499267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 15:03:47.716290  499267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 15:03:47.721196  499267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 15:03:47.725845  499267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 15:03:47.730058  499267 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 15:03:48.018423  499267 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 15:03:48.469673  499267 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 15:03:49.033547  499267 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 15:03:49.035740  499267 kubeadm.go:319] 
	I1121 15:03:49.035825  499267 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 15:03:49.035832  499267 kubeadm.go:319] 
	I1121 15:03:49.035918  499267 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 15:03:49.035928  499267 kubeadm.go:319] 
	I1121 15:03:49.035954  499267 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 15:03:49.036177  499267 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 15:03:49.036241  499267 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 15:03:49.036247  499267 kubeadm.go:319] 
	I1121 15:03:49.036304  499267 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 15:03:49.036308  499267 kubeadm.go:319] 
	I1121 15:03:49.036375  499267 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 15:03:49.036403  499267 kubeadm.go:319] 
	I1121 15:03:49.036459  499267 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 15:03:49.036546  499267 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 15:03:49.036622  499267 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 15:03:49.036630  499267 kubeadm.go:319] 
	I1121 15:03:49.037000  499267 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 15:03:49.037129  499267 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 15:03:49.037136  499267 kubeadm.go:319] 
	I1121 15:03:49.038962  499267 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3hic3h.1sikr0fxhzk10e38 \
	I1121 15:03:49.039081  499267 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 15:03:49.039284  499267 kubeadm.go:319] 	--control-plane 
	I1121 15:03:49.039295  499267 kubeadm.go:319] 
	I1121 15:03:49.039526  499267 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 15:03:49.039536  499267 kubeadm.go:319] 
	I1121 15:03:49.039807  499267 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3hic3h.1sikr0fxhzk10e38 \
	I1121 15:03:49.040091  499267 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
	I1121 15:03:49.054813  499267 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 15:03:49.055050  499267 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 15:03:49.055160  499267 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
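
The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA: kubeadm publishes the SHA-256 of the CA certificate's Subject Public Key Info so a joining node can verify the CA it fetches over the bootstrap channel. A minimal sketch of reproducing that hash; the path is kubeadm's default, while minikube keeps its CA under /var/lib/minikube/certs:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // matches the hash in the join command
}
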
	I1121 15:03:49.055175  499267 cni.go:84] Creating CNI manager for ""
	I1121 15:03:49.055182  499267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:03:49.058415  499267 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 15:03:45.849173  501835 addons.go:530] duration metric: took 10.961554415s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1121 15:03:45.863745  501835 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1121 15:03:45.873401  501835 api_server.go:141] control plane version: v1.34.1
	I1121 15:03:45.873429  501835 api_server.go:131] duration metric: took 32.815723ms to wait for apiserver health ...
	I1121 15:03:45.873438  501835 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:03:45.885593  501835 system_pods.go:59] 8 kube-system pods found
	I1121 15:03:45.885629  501835 system_pods.go:61] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:03:45.885638  501835 system_pods.go:61] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:03:45.885647  501835 system_pods.go:61] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:03:45.885652  501835 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:03:45.885656  501835 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:03:45.885661  501835 system_pods.go:61] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:03:45.885667  501835 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:03:45.885671  501835 system_pods.go:61] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Running
	I1121 15:03:45.885686  501835 system_pods.go:74] duration metric: took 12.232831ms to wait for pod list to return data ...
	I1121 15:03:45.885694  501835 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:03:45.898790  501835 default_sa.go:45] found service account: "default"
	I1121 15:03:45.898821  501835 default_sa.go:55] duration metric: took 13.120702ms for default service account to be created ...
	I1121 15:03:45.898831  501835 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:03:45.906212  501835 system_pods.go:86] 8 kube-system pods found
	I1121 15:03:45.906259  501835 system_pods.go:89] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:03:45.906269  501835 system_pods.go:89] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:03:45.906275  501835 system_pods.go:89] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:03:45.906280  501835 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:03:45.906285  501835 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:03:45.906289  501835 system_pods.go:89] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:03:45.906295  501835 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:03:45.906300  501835 system_pods.go:89] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Running
	I1121 15:03:45.906309  501835 system_pods.go:126] duration metric: took 7.471569ms to wait for k8s-apps to be running ...
	I1121 15:03:45.906322  501835 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:03:45.906379  501835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:03:45.929802  501835 system_svc.go:56] duration metric: took 23.470336ms WaitForService to wait for kubelet
	I1121 15:03:45.929833  501835 kubeadm.go:587] duration metric: took 11.042515893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:03:45.929851  501835 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:03:45.947000  501835 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:03:45.947037  501835 node_conditions.go:123] node cpu capacity is 2
	I1121 15:03:45.947050  501835 node_conditions.go:105] duration metric: took 17.19233ms to run NodePressure ...
	I1121 15:03:45.947063  501835 start.go:242] waiting for startup goroutines ...
	I1121 15:03:45.947106  501835 start.go:247] waiting for cluster config update ...
	I1121 15:03:45.947124  501835 start.go:256] writing updated cluster config ...
	I1121 15:03:45.947466  501835 ssh_runner.go:195] Run: rm -f paused
	I1121 15:03:45.951661  501835 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:03:45.956788  501835 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zhrs7" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 15:03:47.963037  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	I1121 15:03:49.061477  499267 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 15:03:49.070478  499267 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 15:03:49.070549  499267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 15:03:49.115582  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 15:03:49.704040  499267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 15:03:49.704171  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:49.704251  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-609503 minikube.k8s.io/updated_at=2025_11_21T15_03_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=auto-609503 minikube.k8s.io/primary=true
	I1121 15:03:50.122064  499267 ops.go:34] apiserver oom_adj: -16
	I1121 15:03:50.122168  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:50.622542  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:51.122275  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:51.623262  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:52.122212  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:52.623088  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:53.122758  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:53.622684  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:53.743035  499267 kubeadm.go:1114] duration metric: took 4.038910572s to wait for elevateKubeSystemPrivileges
	I1121 15:03:53.743140  499267 kubeadm.go:403] duration metric: took 30.156788323s to StartCluster
	I1121 15:03:53.743162  499267 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:53.743272  499267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:03:53.744274  499267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:53.744586  499267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 15:03:53.744585  499267 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:03:53.744872  499267 config.go:182] Loaded profile config "auto-609503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:03:53.744911  499267 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 15:03:53.744973  499267 addons.go:70] Setting storage-provisioner=true in profile "auto-609503"
	I1121 15:03:53.744986  499267 addons.go:239] Setting addon storage-provisioner=true in "auto-609503"
	I1121 15:03:53.745026  499267 host.go:66] Checking if "auto-609503" exists ...
	I1121 15:03:53.745484  499267 cli_runner.go:164] Run: docker container inspect auto-609503 --format={{.State.Status}}
	I1121 15:03:53.745998  499267 addons.go:70] Setting default-storageclass=true in profile "auto-609503"
	I1121 15:03:53.746016  499267 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-609503"
	I1121 15:03:53.746288  499267 cli_runner.go:164] Run: docker container inspect auto-609503 --format={{.State.Status}}
	I1121 15:03:53.748609  499267 out.go:179] * Verifying Kubernetes components...
	I1121 15:03:53.751840  499267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:03:53.798897  499267 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1121 15:03:49.964155  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:03:52.464929  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:03:54.465954  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	I1121 15:03:53.802786  499267 addons.go:239] Setting addon default-storageclass=true in "auto-609503"
	I1121 15:03:53.802828  499267 host.go:66] Checking if "auto-609503" exists ...
	I1121 15:03:53.803233  499267 cli_runner.go:164] Run: docker container inspect auto-609503 --format={{.State.Status}}
	I1121 15:03:53.803571  499267 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:03:53.803585  499267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 15:03:53.803633  499267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-609503
	I1121 15:03:53.848001  499267 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 15:03:53.848033  499267 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 15:03:53.848109  499267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-609503
	I1121 15:03:53.860236  499267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/auto-609503/id_rsa Username:docker}
	I1121 15:03:53.884800  499267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/auto-609503/id_rsa Username:docker}
	I1121 15:03:54.312445  499267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:03:54.508883  499267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 15:03:54.565891  499267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 15:03:54.566079  499267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:03:55.801325  499267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.292353349s)
	I1121 15:03:55.801428  499267 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.235317302s)
	I1121 15:03:55.801445  499267 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.235487757s)
	I1121 15:03:55.802863  499267 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
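
The sed pipeline completed above edits the CoreDNS Corefile in flight: it inserts a hosts block mapping host.minikube.internal to the network gateway just before the "forward . /etc/resolv.conf" line, then pipes the result into kubectl replace. A minimal sketch of just the string surgery, on a representative Corefile:

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}`
	hostsBlock := `        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }`
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		// Insert the hosts block immediately before the forward plugin,
		// as the sed expression above does.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}
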
	I1121 15:03:55.802910  499267 node_ready.go:35] waiting up to 15m0s for node "auto-609503" to be "Ready" ...
	I1121 15:03:55.801479  499267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.488933894s)
	I1121 15:03:55.886123  499267 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 15:03:55.889082  499267 addons.go:530] duration metric: took 2.144145193s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 15:03:56.307479  499267 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-609503" context rescaled to 1 replicas
	W1121 15:03:57.806337  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:03:56.962481  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:03:58.963149  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:03:59.806492  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:01.806798  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:00.963328  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:02.972976  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:04.307513  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:06.806410  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:05.462350  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:07.462559  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:08.806689  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:11.306403  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:09.963163  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:11.964992  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:14.463134  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:13.306584  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:15.806323  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:16.965548  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:19.462399  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:18.306580  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:20.806162  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:21.462682  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:23.962660  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:25.962759  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	I1121 15:04:26.462590  501835 pod_ready.go:94] pod "coredns-66bc5c9577-zhrs7" is "Ready"
	I1121 15:04:26.462621  501835 pod_ready.go:86] duration metric: took 40.505803075s for pod "coredns-66bc5c9577-zhrs7" in "kube-system" namespace to be "Ready" or be gone ...
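The 40 seconds of retries above are a poll-until-ready loop over the pod's Ready condition. A minimal sketch with client-go's wait helper, mirroring the "Ready or be gone" semantics in the log (names and intervals are illustrative, not pod_ready.go's actual code):

// Package readiness: a sketch of the "to be Ready or be gone" pod wait.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod reports Ready, disappears, or the timeout hits.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // "or be gone"
			}
			if err != nil {
				return false, nil // transient API error: keep retrying
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}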
	I1121 15:04:26.465456  501835 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.469910  501835 pod_ready.go:94] pod "etcd-default-k8s-diff-port-124330" is "Ready"
	I1121 15:04:26.469942  501835 pod_ready.go:86] duration metric: took 4.458144ms for pod "etcd-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.472604  501835 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.477324  501835 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-124330" is "Ready"
	I1121 15:04:26.477354  501835 pod_ready.go:86] duration metric: took 4.72419ms for pod "kube-apiserver-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.479989  501835 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.660283  501835 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-124330" is "Ready"
	I1121 15:04:26.660309  501835 pod_ready.go:86] duration metric: took 180.293131ms for pod "kube-controller-manager-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.860669  501835 pod_ready.go:83] waiting for pod "kube-proxy-fr5df" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:27.259919  501835 pod_ready.go:94] pod "kube-proxy-fr5df" is "Ready"
	I1121 15:04:27.259952  501835 pod_ready.go:86] duration metric: took 399.256376ms for pod "kube-proxy-fr5df" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:27.460148  501835 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:27.860506  501835 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-124330" is "Ready"
	I1121 15:04:27.860534  501835 pod_ready.go:86] duration metric: took 400.3565ms for pod "kube-scheduler-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:27.860545  501835 pod_ready.go:40] duration metric: took 41.908852364s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:04:27.912609  501835 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:04:27.917936  501835 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-124330" cluster and "default" namespace by default
	W1121 15:04:23.306373  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:25.306832  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:27.806279  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:30.306792  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:32.805939  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	I1121 15:04:34.306321  499267 node_ready.go:49] node "auto-609503" is "Ready"
	I1121 15:04:34.306355  499267 node_ready.go:38] duration metric: took 38.503410571s for node "auto-609503" to be "Ready" ...
	I1121 15:04:34.306375  499267 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:04:34.306439  499267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:04:34.325191  499267 api_server.go:72] duration metric: took 40.580576364s to wait for apiserver process to appear ...
	I1121 15:04:34.325218  499267 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:04:34.325239  499267 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:04:34.334733  499267 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 15:04:34.336305  499267 api_server.go:141] control plane version: v1.34.1
	I1121 15:04:34.336329  499267 api_server.go:131] duration metric: took 11.102963ms to wait for apiserver health ...
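The healthz probe above is a plain HTTPS GET whose body should read `ok`. A minimal sketch against the endpoint from the log (skipping TLS verification is a shortcut for this sketch only; minikube trusts its cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The endpoint comes from the log; InsecureSkipVerify is for this sketch only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}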
	I1121 15:04:34.336338  499267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:04:34.341711  499267 system_pods.go:59] 8 kube-system pods found
	I1121 15:04:34.341748  499267 system_pods.go:61] "coredns-66bc5c9577-t8cmt" [cfe3d599-f8e6-439d-ba53-ed8c41d0ec68] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:04:34.341755  499267 system_pods.go:61] "etcd-auto-609503" [1643ef90-f284-407a-ba53-40b3cb0ce799] Running
	I1121 15:04:34.341760  499267 system_pods.go:61] "kindnet-kwthc" [ee54de0c-419d-4b2c-ad50-c1645cdff6b2] Running
	I1121 15:04:34.341765  499267 system_pods.go:61] "kube-apiserver-auto-609503" [9fcd9ac6-1199-4a30-934a-c299263f0683] Running
	I1121 15:04:34.341769  499267 system_pods.go:61] "kube-controller-manager-auto-609503" [bbc7db4e-9280-4bea-9ef2-0da7a644cc7b] Running
	I1121 15:04:34.341773  499267 system_pods.go:61] "kube-proxy-7wgzz" [340a0657-b877-4de1-aaa5-65e4aa99fd68] Running
	I1121 15:04:34.341777  499267 system_pods.go:61] "kube-scheduler-auto-609503" [6a3d7cb9-adf5-40a8-8663-03681e985f47] Running
	I1121 15:04:34.341783  499267 system_pods.go:61] "storage-provisioner" [52dff593-b9aa-4dd0-856b-03bcc6136a13] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:04:34.341789  499267 system_pods.go:74] duration metric: took 5.444888ms to wait for pod list to return data ...
	I1121 15:04:34.341797  499267 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:04:34.351651  499267 default_sa.go:45] found service account: "default"
	I1121 15:04:34.351678  499267 default_sa.go:55] duration metric: took 9.874017ms for default service account to be created ...
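The default-service-account wait is a Get retried until the controller-manager has created the account in the namespace. A minimal sketch, assuming a clientset built as in the earlier sketch:

// Package saready: a sketch of the default service-account wait above.
package saready

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitDefaultSA polls for the "default" ServiceAccount in a namespace.
func waitDefaultSA(cs kubernetes.Interface, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := cs.CoreV1().ServiceAccounts(ns).
			Get(context.Background(), "default", metav1.GetOptions{}); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created in %s", ns)
}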
	I1121 15:04:34.351689  499267 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:04:34.358211  499267 system_pods.go:86] 8 kube-system pods found
	I1121 15:04:34.358243  499267 system_pods.go:89] "coredns-66bc5c9577-t8cmt" [cfe3d599-f8e6-439d-ba53-ed8c41d0ec68] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:04:34.358250  499267 system_pods.go:89] "etcd-auto-609503" [1643ef90-f284-407a-ba53-40b3cb0ce799] Running
	I1121 15:04:34.358256  499267 system_pods.go:89] "kindnet-kwthc" [ee54de0c-419d-4b2c-ad50-c1645cdff6b2] Running
	I1121 15:04:34.358260  499267 system_pods.go:89] "kube-apiserver-auto-609503" [9fcd9ac6-1199-4a30-934a-c299263f0683] Running
	I1121 15:04:34.358264  499267 system_pods.go:89] "kube-controller-manager-auto-609503" [bbc7db4e-9280-4bea-9ef2-0da7a644cc7b] Running
	I1121 15:04:34.358268  499267 system_pods.go:89] "kube-proxy-7wgzz" [340a0657-b877-4de1-aaa5-65e4aa99fd68] Running
	I1121 15:04:34.358272  499267 system_pods.go:89] "kube-scheduler-auto-609503" [6a3d7cb9-adf5-40a8-8663-03681e985f47] Running
	I1121 15:04:34.358277  499267 system_pods.go:89] "storage-provisioner" [52dff593-b9aa-4dd0-856b-03bcc6136a13] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:04:34.358305  499267 retry.go:31] will retry after 259.035465ms: missing components: kube-dns
	I1121 15:04:34.626392  499267 system_pods.go:86] 8 kube-system pods found
	I1121 15:04:34.626429  499267 system_pods.go:89] "coredns-66bc5c9577-t8cmt" [cfe3d599-f8e6-439d-ba53-ed8c41d0ec68] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:04:34.626437  499267 system_pods.go:89] "etcd-auto-609503" [1643ef90-f284-407a-ba53-40b3cb0ce799] Running
	I1121 15:04:34.626443  499267 system_pods.go:89] "kindnet-kwthc" [ee54de0c-419d-4b2c-ad50-c1645cdff6b2] Running
	I1121 15:04:34.626448  499267 system_pods.go:89] "kube-apiserver-auto-609503" [9fcd9ac6-1199-4a30-934a-c299263f0683] Running
	I1121 15:04:34.626452  499267 system_pods.go:89] "kube-controller-manager-auto-609503" [bbc7db4e-9280-4bea-9ef2-0da7a644cc7b] Running
	I1121 15:04:34.626457  499267 system_pods.go:89] "kube-proxy-7wgzz" [340a0657-b877-4de1-aaa5-65e4aa99fd68] Running
	I1121 15:04:34.626461  499267 system_pods.go:89] "kube-scheduler-auto-609503" [6a3d7cb9-adf5-40a8-8663-03681e985f47] Running
	I1121 15:04:34.626467  499267 system_pods.go:89] "storage-provisioner" [52dff593-b9aa-4dd0-856b-03bcc6136a13] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:04:34.626484  499267 retry.go:31] will retry after 253.677534ms: missing components: kube-dns
	I1121 15:04:34.884608  499267 system_pods.go:86] 8 kube-system pods found
	I1121 15:04:34.884644  499267 system_pods.go:89] "coredns-66bc5c9577-t8cmt" [cfe3d599-f8e6-439d-ba53-ed8c41d0ec68] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:04:34.884651  499267 system_pods.go:89] "etcd-auto-609503" [1643ef90-f284-407a-ba53-40b3cb0ce799] Running
	I1121 15:04:34.884658  499267 system_pods.go:89] "kindnet-kwthc" [ee54de0c-419d-4b2c-ad50-c1645cdff6b2] Running
	I1121 15:04:34.884662  499267 system_pods.go:89] "kube-apiserver-auto-609503" [9fcd9ac6-1199-4a30-934a-c299263f0683] Running
	I1121 15:04:34.884666  499267 system_pods.go:89] "kube-controller-manager-auto-609503" [bbc7db4e-9280-4bea-9ef2-0da7a644cc7b] Running
	I1121 15:04:34.884671  499267 system_pods.go:89] "kube-proxy-7wgzz" [340a0657-b877-4de1-aaa5-65e4aa99fd68] Running
	I1121 15:04:34.884674  499267 system_pods.go:89] "kube-scheduler-auto-609503" [6a3d7cb9-adf5-40a8-8663-03681e985f47] Running
	I1121 15:04:34.884680  499267 system_pods.go:89] "storage-provisioner" [52dff593-b9aa-4dd0-856b-03bcc6136a13] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:04:34.884695  499267 retry.go:31] will retry after 337.98202ms: missing components: kube-dns
	I1121 15:04:35.227360  499267 system_pods.go:86] 8 kube-system pods found
	I1121 15:04:35.227391  499267 system_pods.go:89] "coredns-66bc5c9577-t8cmt" [cfe3d599-f8e6-439d-ba53-ed8c41d0ec68] Running
	I1121 15:04:35.227399  499267 system_pods.go:89] "etcd-auto-609503" [1643ef90-f284-407a-ba53-40b3cb0ce799] Running
	I1121 15:04:35.227403  499267 system_pods.go:89] "kindnet-kwthc" [ee54de0c-419d-4b2c-ad50-c1645cdff6b2] Running
	I1121 15:04:35.227407  499267 system_pods.go:89] "kube-apiserver-auto-609503" [9fcd9ac6-1199-4a30-934a-c299263f0683] Running
	I1121 15:04:35.227411  499267 system_pods.go:89] "kube-controller-manager-auto-609503" [bbc7db4e-9280-4bea-9ef2-0da7a644cc7b] Running
	I1121 15:04:35.227415  499267 system_pods.go:89] "kube-proxy-7wgzz" [340a0657-b877-4de1-aaa5-65e4aa99fd68] Running
	I1121 15:04:35.227428  499267 system_pods.go:89] "kube-scheduler-auto-609503" [6a3d7cb9-adf5-40a8-8663-03681e985f47] Running
	I1121 15:04:35.227436  499267 system_pods.go:89] "storage-provisioner" [52dff593-b9aa-4dd0-856b-03bcc6136a13] Running
	I1121 15:04:35.227444  499267 system_pods.go:126] duration metric: took 875.749019ms to wait for k8s-apps to be running ...
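The "will retry after 259ms / 253ms / 337ms" lines above come from a jittered back-off loop. A minimal sketch of that pattern (the base interval and growth are illustrative guesses, not minikube's retry.go tuning):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter keeps calling fn with a randomized, growing delay until it
// succeeds or attempts run out, like the "will retry after ..." lines above.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter in [0.5, 1.5) of the base, growing slightly each attempt.
		d := time.Duration(float64(base) * (0.5 + rand.Float64()) * float64(i+1))
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithJitter(5, 250*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println(calls, err)
}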
	I1121 15:04:35.227458  499267 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:04:35.227530  499267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:04:35.241653  499267 system_svc.go:56] duration metric: took 14.18468ms WaitForService to wait for kubelet
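The kubelet service check shells out to systemctl and keys entirely off the exit code (the stray `service` token in the logged command is verbatim minikube output; plain `systemctl is-active --quiet kubelet` is the usual form). A minimal equivalent:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` prints nothing and exits 0
	// only when the unit is active, so the error check is the whole test.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}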
	I1121 15:04:35.241682  499267 kubeadm.go:587] duration metric: took 41.497072042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:04:35.241706  499267 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:04:35.247564  499267 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:04:35.247596  499267 node_conditions.go:123] node cpu capacity is 2
	I1121 15:04:35.247609  499267 node_conditions.go:105] duration metric: took 5.896996ms to run NodePressure ...
	I1121 15:04:35.247622  499267 start.go:242] waiting for startup goroutines ...
	I1121 15:04:35.247629  499267 start.go:247] waiting for cluster config update ...
	I1121 15:04:35.247640  499267 start.go:256] writing updated cluster config ...
	I1121 15:04:35.247936  499267 ssh_runner.go:195] Run: rm -f paused
	I1121 15:04:35.253279  499267 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:04:35.257342  499267 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t8cmt" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.262515  499267 pod_ready.go:94] pod "coredns-66bc5c9577-t8cmt" is "Ready"
	I1121 15:04:35.262559  499267 pod_ready.go:86] duration metric: took 5.18685ms for pod "coredns-66bc5c9577-t8cmt" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.265789  499267 pod_ready.go:83] waiting for pod "etcd-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.271173  499267 pod_ready.go:94] pod "etcd-auto-609503" is "Ready"
	I1121 15:04:35.271207  499267 pod_ready.go:86] duration metric: took 5.391758ms for pod "etcd-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.273705  499267 pod_ready.go:83] waiting for pod "kube-apiserver-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.282211  499267 pod_ready.go:94] pod "kube-apiserver-auto-609503" is "Ready"
	I1121 15:04:35.282282  499267 pod_ready.go:86] duration metric: took 8.549144ms for pod "kube-apiserver-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.285052  499267 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.657420  499267 pod_ready.go:94] pod "kube-controller-manager-auto-609503" is "Ready"
	I1121 15:04:35.657450  499267 pod_ready.go:86] duration metric: took 372.370879ms for pod "kube-controller-manager-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.857853  499267 pod_ready.go:83] waiting for pod "kube-proxy-7wgzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:36.257605  499267 pod_ready.go:94] pod "kube-proxy-7wgzz" is "Ready"
	I1121 15:04:36.257689  499267 pod_ready.go:86] duration metric: took 399.809504ms for pod "kube-proxy-7wgzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:36.458314  499267 pod_ready.go:83] waiting for pod "kube-scheduler-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:36.857596  499267 pod_ready.go:94] pod "kube-scheduler-auto-609503" is "Ready"
	I1121 15:04:36.857621  499267 pod_ready.go:86] duration metric: took 399.218613ms for pod "kube-scheduler-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:36.857634  499267 pod_ready.go:40] duration metric: took 1.604279707s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:04:36.921467  499267 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:04:36.924644  499267 out.go:179] * Done! kubectl is now configured to use "auto-609503" cluster and "default" namespace by default
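The "minor skew: 1" lines above compare the kubectl client's minor version with the cluster's. A rough sketch of that arithmetic on the versions from the log (the parsing here is simplified; start.go's real logic may differ):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor pulls the minor component out of a "major.minor[.patch]" string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.33.2", "1.34.1" // values from the log above
	skew := minor(cluster) - minor(client)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}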
	
	
	==> CRI-O <==
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.283823324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.283995478Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/11b70626da0821f486d59c974314c4558b518975f50376f3c7636fb7a9730b48/merged/etc/passwd: no such file or directory"
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.284015662Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/11b70626da0821f486d59c974314c4558b518975f50376f3c7636fb7a9730b48/merged/etc/group: no such file or directory"
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.284249232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.306121911Z" level=info msg="Created container 7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db: kube-system/storage-provisioner/storage-provisioner" id=9518084d-b665-44a7-90df-5bd29805dcba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.307065734Z" level=info msg="Starting container: 7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db" id=6d74a156-cf2a-49b2-a30c-d2afeda038a9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.309302609Z" level=info msg="Started container" PID=1650 containerID=7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db description=kube-system/storage-provisioner/storage-provisioner id=6d74a156-cf2a-49b2-a30c-d2afeda038a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8aaa62b4ab20edb90bbff8b67d6117b1cc9917bd6dc99bd665902437a3bd8790
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.430861431Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.4346358Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.4346732Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.434696503Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.439483101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.439516932Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.439540177Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.442901766Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.442939281Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.442962108Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.445909719Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.445942967Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.445963595Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.449616839Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.449650702Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.449674136Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.453323688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.453357058Z" level=info msg="Updated default CNI network name to kindnet"
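CRI-O's "CNI monitoring event" lines above react to CREATE/WRITE/RENAME events under /etc/cni/net.d. A minimal sketch of the same watch pattern with fsnotify (github.com/fsnotify/fsnotify; the reload step is a stub, and this is not CRI-O's actual ocicni code):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the CNI config directory, like the "CNI monitoring event" lines above.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-watcher.Events:
			// A CREATE/WRITE/RENAME on a .conflist would trigger a config reload here.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-watcher.Errors:
			log.Println("watch error:", err)
		}
	}
}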
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7b937ecab0a24       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   8aaa62b4ab20e       storage-provisioner                                    kube-system
	0df3c2682375b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   7719e009831c0       dashboard-metrics-scraper-6ffb444bf9-kzfq7             kubernetes-dashboard
	0328bfae2ad66       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   e0814e61a4311       kubernetes-dashboard-855c9754f9-8j9t6                  kubernetes-dashboard
	1b103ed62372e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   7e073d75b5257       busybox                                                default
	5b113bd590fba       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   437f63aec44b1       coredns-66bc5c9577-zhrs7                               kube-system
	c883b96946c23       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   e79f03ba22fa3       kube-proxy-fr5df                                       kube-system
	5f0e5e46cc630       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   8aaa62b4ab20e       storage-provisioner                                    kube-system
	8542c4d9705bc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   c84229870d8b5       kindnet-wdpnm                                          kube-system
	ee9ac53aba59f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   a2d7008d1f4aa       kube-controller-manager-default-k8s-diff-port-124330   kube-system
	f3450da3d6505       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   813c975080bb7       kube-apiserver-default-k8s-diff-port-124330            kube-system
	c627117ae5597       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ddc088e739d2d       kube-scheduler-default-k8s-diff-port-124330            kube-system
	8812c413c9de6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   7d0f9169385a0       etcd-default-k8s-diff-port-124330                      kube-system
	
	
	==> coredns [5b113bd590fba8ccf5296efcebfb784b1cf6e565590f72d8fad43cb29967ffdc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55343 - 52381 "HINFO IN 3673569578872881866.5877170825921050816. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019226274s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
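The plugin/ready lines above come from CoreDNS's ready plugin, which serves HTTP 200 on its readiness port only once every tracked plugin reports ready (8181 is the plugin's default port). A sketch of probing it, with a placeholder pod IP:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 8181 is the ready plugin's default port; the IP is a placeholder
	// for the CoreDNS pod address.
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("http://10.244.0.2:8181/ready")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("coredns ready")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(time.Second) // "Still waiting on: kubernetes"
	}
	fmt.Println("timed out waiting for coredns")
}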
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-124330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-124330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=default-k8s-diff-port-124330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T15_02_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 15:02:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-124330
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:04:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:04:23 +0000   Fri, 21 Nov 2025 15:02:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:04:23 +0000   Fri, 21 Nov 2025 15:02:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:04:23 +0000   Fri, 21 Nov 2025 15:02:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 15:04:23 +0000   Fri, 21 Nov 2025 15:02:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-124330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                6a639b89-86eb-4814-8ac4-d429830f770c
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-zhrs7                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 etcd-default-k8s-diff-port-124330                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m33s
	  kube-system                 kindnet-wdpnm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-124330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-124330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-fr5df                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-124330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kzfq7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8j9t6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m26s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m43s (x8 over 2m44s)  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m43s (x8 over 2m44s)  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m43s (x8 over 2m44s)  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m33s                  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s                  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m33s                  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m29s                  node-controller  Node default-k8s-diff-port-124330 event: Registered Node default-k8s-diff-port-124330 in Controller
	  Normal   NodeReady                107s                   kubelet          Node default-k8s-diff-port-124330 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                    node-controller  Node default-k8s-diff-port-124330 event: Registered Node default-k8s-diff-port-124330 in Controller
	
	
	==> dmesg <==
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	[Nov21 15:00] overlayfs: idmapped layers are currently not supported
	[ +13.392744] overlayfs: idmapped layers are currently not supported
	[Nov21 15:01] overlayfs: idmapped layers are currently not supported
	[Nov21 15:02] overlayfs: idmapped layers are currently not supported
	[ +25.555443] overlayfs: idmapped layers are currently not supported
	[Nov21 15:03] overlayfs: idmapped layers are currently not supported
	[  +2.173955] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8812c413c9de68d93c0162764f45b3d55f29007bce2646ce2fb79c02a7766a43] <==
	{"level":"warn","ts":"2025-11-21T15:03:38.985019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.030822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.079352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.110732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.177012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.199999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.241767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.271366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.301013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.346325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.399355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.411022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.453724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.479399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.541462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.563066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.588653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.614341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.651222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.683263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.726519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.755989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.796464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.817925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.996169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36200","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:04:43 up  2:47,  0 user,  load average: 3.09, 3.81, 3.06
	Linux default-k8s-diff-port-124330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8542c4d9705bcfbb9ccfc9cee884439ff94461901253f047e84e29acf9b7621e] <==
	I1121 15:03:44.143605       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 15:03:44.155208       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 15:03:44.155361       1 main.go:148] setting mtu 1500 for CNI 
	I1121 15:03:44.155374       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 15:03:44.155385       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T15:03:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 15:03:44.430487       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 15:03:44.430505       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 15:03:44.430513       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 15:03:44.430826       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 15:04:14.430400       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 15:04:14.430514       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 15:04:14.431301       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 15:04:14.431344       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1121 15:04:15.930936       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 15:04:15.930966       1 metrics.go:72] Registering metrics
	I1121 15:04:15.931039       1 controller.go:711] "Syncing nftables rules"
	I1121 15:04:24.430530       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:04:24.430576       1 main.go:301] handling current node
	I1121 15:04:34.429722       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:04:34.429757       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f3450da3d6505714a2ddbd0849055e0c303889ab8fbf96ab66e5fb100167b3d0] <==
	I1121 15:03:41.859254       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 15:03:41.859260       1 cache.go:39] Caches are synced for autoregister controller
	I1121 15:03:41.867866       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 15:03:41.901036       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1121 15:03:41.982548       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1121 15:03:41.982575       1 policy_source.go:240] refreshing policies
	I1121 15:03:41.986521       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1121 15:03:42.017368       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 15:03:42.017409       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 15:03:42.040177       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 15:03:42.060993       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 15:03:42.062343       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:03:42.133774       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1121 15:03:42.217608       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 15:03:42.435903       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 15:03:43.241148       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 15:03:44.571453       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 15:03:44.919784       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 15:03:45.249863       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 15:03:45.421537       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 15:03:45.784575       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.91.220"}
	I1121 15:03:45.833167       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.134.110"}
	I1121 15:03:46.890213       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 15:03:47.222934       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 15:03:47.382390       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ee9ac53aba59fc1e496aea56983c3d0c392cff161ea0a9c80336aaf6a3bb18d1] <==
	I1121 15:03:46.846229       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 15:03:46.846305       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-124330"
	I1121 15:03:46.846360       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 15:03:46.851549       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 15:03:46.851587       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 15:03:46.853388       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 15:03:46.855617       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 15:03:46.857844       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 15:03:46.858792       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 15:03:46.858841       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 15:03:46.861994       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 15:03:46.880964       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 15:03:46.884509       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 15:03:46.884812       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 15:03:46.886940       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 15:03:46.888001       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 15:03:46.891738       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 15:03:46.893185       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:03:46.897349       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 15:03:46.899047       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 15:03:46.899124       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 15:03:46.915365       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:03:46.953803       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:03:46.953837       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 15:03:46.953846       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c883b96946c233f38d0c8644abe895f3d621a5ad142233e77120f6b5eda51757] <==
	I1121 15:03:45.742562       1 server_linux.go:53] "Using iptables proxy"
	I1121 15:03:46.194295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 15:03:46.298243       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 15:03:46.307959       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 15:03:46.308048       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 15:03:46.655652       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 15:03:46.655776       1 server_linux.go:132] "Using iptables Proxier"
	I1121 15:03:46.929603       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 15:03:46.930026       1 server.go:527] "Version info" version="v1.34.1"
	I1121 15:03:46.930254       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:03:46.932169       1 config.go:200] "Starting service config controller"
	I1121 15:03:46.932246       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 15:03:46.932316       1 config.go:106] "Starting endpoint slice config controller"
	I1121 15:03:46.932344       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 15:03:46.932450       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 15:03:46.932483       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 15:03:46.933426       1 config.go:309] "Starting node config controller"
	I1121 15:03:46.933510       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 15:03:46.933564       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 15:03:47.033178       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 15:03:47.033221       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 15:03:47.033266       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c627117ae55976c3bd9490f6441736eebc7000d2c50a16ac0fbd1824c9604beb] <==
	I1121 15:03:40.640093       1 serving.go:386] Generated self-signed cert in-memory
	I1121 15:03:47.219599       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 15:03:47.219719       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:03:47.250148       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 15:03:47.250193       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 15:03:47.250350       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:03:47.250361       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:03:47.250377       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:03:47.250385       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:03:47.251355       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 15:03:47.251775       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 15:03:47.351845       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:03:47.351985       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 15:03:47.352144       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 15:03:47 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:47.533009     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj5vd\" (UniqueName: \"kubernetes.io/projected/e8eeec4c-0209-4d3a-bf07-5706e2abe27e-kube-api-access-zj5vd\") pod \"kubernetes-dashboard-855c9754f9-8j9t6\" (UID: \"e8eeec4c-0209-4d3a-bf07-5706e2abe27e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8j9t6"
	Nov 21 15:03:47 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:47.634023     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1f883e74-021c-4514-a1b6-0497912dadd7-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kzfq7\" (UID: \"1f883e74-021c-4514-a1b6-0497912dadd7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7"
	Nov 21 15:03:47 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:47.634096     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7gfh\" (UniqueName: \"kubernetes.io/projected/1f883e74-021c-4514-a1b6-0497912dadd7-kube-api-access-c7gfh\") pod \"dashboard-metrics-scraper-6ffb444bf9-kzfq7\" (UID: \"1f883e74-021c-4514-a1b6-0497912dadd7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7"
	Nov 21 15:03:47 default-k8s-diff-port-124330 kubelet[788]: W1121 15:03:47.850330     788 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/crio-e0814e61a4311ff91969e04c10882f6cd782d563b8eca26fd1fe53f84644670a WatchSource:0}: Error finding container e0814e61a4311ff91969e04c10882f6cd782d563b8eca26fd1fe53f84644670a: Status 404 returned error can't find the container with id e0814e61a4311ff91969e04c10882f6cd782d563b8eca26fd1fe53f84644670a
	Nov 21 15:03:54 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:54.206742     788 scope.go:117] "RemoveContainer" containerID="79bb927e953c3dbd1c84fc6ed8d6dc4287bbff50ca50979787f1c3248354764e"
	Nov 21 15:03:55 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:55.211598     788 scope.go:117] "RemoveContainer" containerID="79bb927e953c3dbd1c84fc6ed8d6dc4287bbff50ca50979787f1c3248354764e"
	Nov 21 15:03:55 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:55.211905     788 scope.go:117] "RemoveContainer" containerID="20de9199729559ad0d9aba4b1ddb1571b04e053caa4e879499c52f0aee9e9d4e"
	Nov 21 15:03:55 default-k8s-diff-port-124330 kubelet[788]: E1121 15:03:55.212058     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:03:56 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:56.216118     788 scope.go:117] "RemoveContainer" containerID="20de9199729559ad0d9aba4b1ddb1571b04e053caa4e879499c52f0aee9e9d4e"
	Nov 21 15:03:56 default-k8s-diff-port-124330 kubelet[788]: E1121 15:03:56.216268     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:03:57 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:57.754141     788 scope.go:117] "RemoveContainer" containerID="20de9199729559ad0d9aba4b1ddb1571b04e053caa4e879499c52f0aee9e9d4e"
	Nov 21 15:03:57 default-k8s-diff-port-124330 kubelet[788]: E1121 15:03:57.754327     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:04:10 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:10.733516     788 scope.go:117] "RemoveContainer" containerID="20de9199729559ad0d9aba4b1ddb1571b04e053caa4e879499c52f0aee9e9d4e"
	Nov 21 15:04:11 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:11.257356     788 scope.go:117] "RemoveContainer" containerID="20de9199729559ad0d9aba4b1ddb1571b04e053caa4e879499c52f0aee9e9d4e"
	Nov 21 15:04:11 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:11.257651     788 scope.go:117] "RemoveContainer" containerID="0df3c2682375bee0e258205c092887bac1830cecbcd0d95b1924532bfaa5484f"
	Nov 21 15:04:11 default-k8s-diff-port-124330 kubelet[788]: E1121 15:04:11.257806     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:04:11 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:11.289547     788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8j9t6" podStartSLOduration=13.048951645 podStartE2EDuration="24.289529901s" podCreationTimestamp="2025-11-21 15:03:47 +0000 UTC" firstStartedPulling="2025-11-21 15:03:47.871784412 +0000 UTC m=+14.464648596" lastFinishedPulling="2025-11-21 15:03:59.112362676 +0000 UTC m=+25.705226852" observedRunningTime="2025-11-21 15:03:59.243188425 +0000 UTC m=+25.836052609" watchObservedRunningTime="2025-11-21 15:04:11.289529901 +0000 UTC m=+37.882394076"
	Nov 21 15:04:15 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:15.271384     788 scope.go:117] "RemoveContainer" containerID="5f0e5e46cc63025dfbd9f042185466d845b7c03f4a63e54afd5bb50b59c9f815"
	Nov 21 15:04:17 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:17.753914     788 scope.go:117] "RemoveContainer" containerID="0df3c2682375bee0e258205c092887bac1830cecbcd0d95b1924532bfaa5484f"
	Nov 21 15:04:17 default-k8s-diff-port-124330 kubelet[788]: E1121 15:04:17.754618     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:04:29 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:29.735080     788 scope.go:117] "RemoveContainer" containerID="0df3c2682375bee0e258205c092887bac1830cecbcd0d95b1924532bfaa5484f"
	Nov 21 15:04:29 default-k8s-diff-port-124330 kubelet[788]: E1121 15:04:29.735412     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:04:40 default-k8s-diff-port-124330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 15:04:40 default-k8s-diff-port-124330 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 15:04:40 default-k8s-diff-port-124330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0328bfae2ad66c3fc1fcf7d24675c343d9b2c56f62faf1eb3ba8350ce1788d93] <==
	2025/11/21 15:03:59 Using namespace: kubernetes-dashboard
	2025/11/21 15:03:59 Using in-cluster config to connect to apiserver
	2025/11/21 15:03:59 Using secret token for csrf signing
	2025/11/21 15:03:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 15:03:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 15:03:59 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 15:03:59 Generating JWE encryption key
	2025/11/21 15:03:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 15:03:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 15:04:00 Initializing JWE encryption key from synchronized object
	2025/11/21 15:04:00 Creating in-cluster Sidecar client
	2025/11/21 15:04:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 15:04:00 Serving insecurely on HTTP port: 9090
	2025/11/21 15:04:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 15:03:59 Starting overwatch
	
	
	==> storage-provisioner [5f0e5e46cc63025dfbd9f042185466d845b7c03f4a63e54afd5bb50b59c9f815] <==
	I1121 15:03:45.119707       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 15:04:15.143485       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db] <==
	I1121 15:04:15.397888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 15:04:15.397938       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 15:04:15.400553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:18.855283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:23.115378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:26.713567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:29.775411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:32.797442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:32.802170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:04:32.802331       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 15:04:32.802500       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-124330_815719b2-3051-407b-8bdf-de3c0eb9d913!
	I1121 15:04:32.803592       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf0e76b3-7a61-453e-ad8b-291e224c4abe", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-124330_815719b2-3051-407b-8bdf-de3c0eb9d913 became leader
	W1121 15:04:32.808493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:32.814127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:04:32.903059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-124330_815719b2-3051-407b-8bdf-de3c0eb9d913!
	W1121 15:04:34.816861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:34.823625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:36.827328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:36.832456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:38.839049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:38.851548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:40.855247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:40.863365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:42.866589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:42.871754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
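The storage-provisioner logs above show it winning the kube-system/k8s.io-minikube-hostpath lease while the API server repeatedly warns that the v1 Endpoints object it locks on is deprecated in v1.33+. For reference, a minimal client-go sketch of the Lease-based lock the warning points toward; this illustrates the replacement pattern only, it is not the provisioner's actual source, and the client wiring is assumed:

	// Sketch: Lease-backed leader election replacing the deprecated v1 Endpoints lock.
	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumed: runs in-cluster, like the provisioner
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* lost the lease; stop provisioning */ },
			},
		})
	}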
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330: exit status 2 (373.69848ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
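The "(may be ok)" reflects that minikube status appears to report component state through its exit code, so a non-zero exit paired with readable output is informational for a paused cluster rather than a failure in itself. A rough Go sketch of how a harness can tolerate that, using the binary path and profile from this run (the tolerance logic is an assumption, not the helpers_test.go source):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}",
			"-p", "default-k8s-diff-port-124330",
			"-n", "default-k8s-diff-port-124330")
		out, err := cmd.Output() // stdout is still captured when the command exits non-zero
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 2 {
			// treat exit status 2 as informational, mirroring "(may be ok)" above
			fmt.Printf("status error: exit status %d (may be ok): %s\n", ee.ExitCode(), out)
		} else if err != nil {
			panic(err)
		} else {
			fmt.Printf("%s\n", out)
		}
	}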
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-124330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
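This query filters server-side with a field selector (status.phase!=Running) and prints only pod names via JSONPath, so empty output means every pod reports Running. A small client-go sketch of the equivalent check; the kubeconfig wiring is assumed for illustration:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace + "/" + p.Name) // no output means all pods are Running
		}
	}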
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-124330
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-124330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818",
	        "Created": "2025-11-21T15:01:40.035459408Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501969,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T15:03:25.142970088Z",
	            "FinishedAt": "2025-11-21T15:03:24.0338101Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/hostname",
	        "HostsPath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/hosts",
	        "LogPath": "/var/lib/docker/containers/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818-json.log",
	        "Name": "/default-k8s-diff-port-124330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-124330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-124330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818",
	                "LowerDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae-init/diff:/var/lib/docker/overlay2/4bb50108edf048e257e14448f7bf5e72004402066df586355985da502f78efa4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1ac9f699782810d5eb105621fe7efb90837a93f25caf0c55b80a0534d8bc54ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-124330",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-124330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-124330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-124330",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-124330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10238c7f745c38df13197f304733401acc849a4a63d1bdb26f6964f39fbda4b9",
	            "SandboxKey": "/var/run/docker/netns/10238c7f745c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-124330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:47:2a:f2:6f:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "571375adbe67c8114c1253f4d87fb2a0f5ebbd2759db87cf3bcc3311dbadaf5e",
	                    "EndpointID": "75ae195c29d83c551553163a039646574b4b1c3caa2f1806dc5c5d0776dfd859",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-124330",
	                        "fad72cd6bedb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
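The NetworkSettings.Ports map in this inspect output is what the restart path below relies on: the cli_runner calls in the "Last Start" log extract the 22/tcp binding with a Go template to find the SSH port (33468 here). A standalone sketch of that same lookup, reusing the template string visible in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask Docker for the host port published for the node container's SSH endpoint.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-124330").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // "33468" per the NetworkSettings above
	}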
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330: exit status 2 (392.843496ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-124330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-124330 logs -n 25: (1.517601789s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-844780 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p no-preload-844780                                                                                                                                                                                                                          │ no-preload-844780            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ image   │ embed-certs-902161 image list --format=json                                                                                                                                                                                                   │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ pause   │ -p embed-certs-902161 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │                     │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ delete  │ -p embed-certs-902161                                                                                                                                                                                                                         │ embed-certs-902161           │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:01 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:01 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-714993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	│ stop    │ -p newest-cni-714993 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-714993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ start   │ -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ image   │ newest-cni-714993 image list --format=json                                                                                                                                                                                                    │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │ 21 Nov 25 15:02 UTC │
	│ pause   │ -p newest-cni-714993 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:02 UTC │                     │
	│ delete  │ -p newest-cni-714993                                                                                                                                                                                                                          │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:03 UTC │
	│ delete  │ -p newest-cni-714993                                                                                                                                                                                                                          │ newest-cni-714993            │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:03 UTC │
	│ start   │ -p auto-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-609503                  │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:04 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-124330 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:03 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:03 UTC │
	│ start   │ -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:03 UTC │ 21 Nov 25 15:04 UTC │
	│ ssh     │ -p auto-609503 pgrep -a kubelet                                                                                                                                                                                                               │ auto-609503                  │ jenkins │ v1.37.0 │ 21 Nov 25 15:04 UTC │ 21 Nov 25 15:04 UTC │
	│ image   │ default-k8s-diff-port-124330 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:04 UTC │ 21 Nov 25 15:04 UTC │
	│ pause   │ -p default-k8s-diff-port-124330 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-124330 │ jenkins │ v1.37.0 │ 21 Nov 25 15:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 15:03:24
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 15:03:24.724792  501835 out.go:360] Setting OutFile to fd 1 ...
	I1121 15:03:24.725093  501835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:03:24.725134  501835 out.go:374] Setting ErrFile to fd 2...
	I1121 15:03:24.725154  501835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 15:03:24.725565  501835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 15:03:24.726118  501835 out.go:368] Setting JSON to false
	I1121 15:03:24.727469  501835 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9957,"bootTime":1763727448,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 15:03:24.727589  501835 start.go:143] virtualization:  
	I1121 15:03:24.733173  501835 out.go:179] * [default-k8s-diff-port-124330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 15:03:24.736546  501835 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 15:03:24.736636  501835 notify.go:221] Checking for updates...
	I1121 15:03:24.741745  501835 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 15:03:24.744724  501835 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:03:24.747571  501835 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 15:03:24.750402  501835 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 15:03:24.753352  501835 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 15:03:24.756634  501835 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:03:24.757217  501835 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 15:03:24.794959  501835 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 15:03:24.795142  501835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:03:24.907091  501835 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 15:03:24.879378898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:03:24.907192  501835 docker.go:319] overlay module found
	I1121 15:03:24.910231  501835 out.go:179] * Using the docker driver based on existing profile
	I1121 15:03:24.913141  501835 start.go:309] selected driver: docker
	I1121 15:03:24.913158  501835 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:03:24.913252  501835 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 15:03:24.913962  501835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 15:03:25.012824  501835 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 15:03:24.995408534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 15:03:25.013201  501835 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:03:25.013228  501835 cni.go:84] Creating CNI manager for ""
	I1121 15:03:25.013282  501835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:03:25.013320  501835 start.go:353] cluster config:
	{Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:03:25.016535  501835 out.go:179] * Starting "default-k8s-diff-port-124330" primary control-plane node in "default-k8s-diff-port-124330" cluster
	I1121 15:03:25.019314  501835 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 15:03:25.022311  501835 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 15:03:25.025114  501835 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:03:25.025176  501835 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 15:03:25.025199  501835 cache.go:65] Caching tarball of preloaded images
	I1121 15:03:25.025290  501835 preload.go:238] Found /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 15:03:25.025301  501835 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 15:03:25.025415  501835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/config.json ...
	I1121 15:03:25.025659  501835 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 15:03:25.059732  501835 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 15:03:25.059752  501835 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 15:03:25.059764  501835 cache.go:243] Successfully downloaded all kic artifacts
	I1121 15:03:25.059786  501835 start.go:360] acquireMachinesLock for default-k8s-diff-port-124330: {Name:mk8c422fee3dc1ab576ba87a9b21326872d469a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 15:03:25.059842  501835 start.go:364] duration metric: took 34.446µs to acquireMachinesLock for "default-k8s-diff-port-124330"
	I1121 15:03:25.059861  501835 start.go:96] Skipping create...Using existing machine configuration
	I1121 15:03:25.059866  501835 fix.go:54] fixHost starting: 
	I1121 15:03:25.060125  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:25.110068  501835 fix.go:112] recreateIfNeeded on default-k8s-diff-port-124330: state=Stopped err=<nil>
	W1121 15:03:25.110100  501835 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 15:03:23.845153  499267 out.go:252]   - Generating certificates and keys ...
	I1121 15:03:23.845326  499267 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 15:03:23.845430  499267 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 15:03:24.654905  499267 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 15:03:25.098384  499267 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 15:03:25.787401  499267 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 15:03:26.395282  499267 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 15:03:27.033592  499267 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 15:03:27.033953  499267 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-609503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 15:03:27.389067  499267 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 15:03:27.389429  499267 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-609503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 15:03:25.113470  501835 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-124330" ...
	I1121 15:03:25.113571  501835 cli_runner.go:164] Run: docker start default-k8s-diff-port-124330
	I1121 15:03:25.416543  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:25.448525  501835 kic.go:430] container "default-k8s-diff-port-124330" state is running.
	I1121 15:03:25.448914  501835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124330
	I1121 15:03:25.471453  501835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/config.json ...
	I1121 15:03:25.471678  501835 machine.go:94] provisionDockerMachine start ...
	I1121 15:03:25.471735  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:25.492868  501835 main.go:143] libmachine: Using SSH client type: native
	I1121 15:03:25.493189  501835 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1121 15:03:25.493198  501835 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 15:03:25.495564  501835 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 15:03:28.653889  501835 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124330
	
	I1121 15:03:28.653918  501835 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-124330"
	I1121 15:03:28.653988  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:28.683972  501835 main.go:143] libmachine: Using SSH client type: native
	I1121 15:03:28.684324  501835 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1121 15:03:28.684338  501835 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-124330 && echo "default-k8s-diff-port-124330" | sudo tee /etc/hostname
	I1121 15:03:28.855215  501835 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124330
	
	I1121 15:03:28.855288  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:28.878927  501835 main.go:143] libmachine: Using SSH client type: native
	I1121 15:03:28.879232  501835 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1121 15:03:28.879255  501835 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-124330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-124330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-124330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 15:03:29.037790  501835 main.go:143] libmachine: SSH cmd err, output: <nil>: 
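
The shell snippet run over SSH above pins the machine hostname to 127.0.1.1 in /etc/hosts: it skips the edit if an entry already exists, rewrites an existing 127.0.1.1 line, or appends a new one. The same logic expressed in Go, as an illustration (the hostname is taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHostname(hosts, name string) string {
	for _, line := range strings.Split(hosts, "\n") {
		f := strings.Fields(line)
		if len(f) > 0 && f[len(f)-1] == name {
			return hosts // already present; mirrors the grep guard above
		}
	}
	var out []string
	replaced := false
	for _, line := range strings.Split(hosts, "\n") {
		f := strings.Fields(line)
		if !replaced && len(f) >= 2 && f[0] == "127.0.1.1" {
			out = append(out, "127.0.1.1 "+name) // the sed branch
			replaced = true
			continue
		}
		out = append(out, line)
	}
	if !replaced {
		out = append(out, "127.0.1.1 "+name) // the tee -a branch
	}
	return strings.Join(out, "\n")
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(pinHostname(string(data), "default-k8s-diff-port-124330"))
}
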
	I1121 15:03:29.037817  501835 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-289204/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-289204/.minikube}
	I1121 15:03:29.037858  501835 ubuntu.go:190] setting up certificates
	I1121 15:03:29.037869  501835 provision.go:84] configureAuth start
	I1121 15:03:29.037934  501835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124330
	I1121 15:03:29.058498  501835 provision.go:143] copyHostCerts
	I1121 15:03:29.058566  501835 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem, removing ...
	I1121 15:03:29.058588  501835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem
	I1121 15:03:29.058665  501835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/ca.pem (1078 bytes)
	I1121 15:03:29.058772  501835 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem, removing ...
	I1121 15:03:29.058783  501835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem
	I1121 15:03:29.058816  501835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/cert.pem (1123 bytes)
	I1121 15:03:29.058880  501835 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem, removing ...
	I1121 15:03:29.058890  501835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem
	I1121 15:03:29.058915  501835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-289204/.minikube/key.pem (1675 bytes)
	I1121 15:03:29.058980  501835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-124330 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-124330 localhost minikube]
	I1121 15:03:30.067263  501835 provision.go:177] copyRemoteCerts
	I1121 15:03:30.067356  501835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 15:03:30.067425  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:30.088803  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:30.195609  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 15:03:30.234692  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1121 15:03:30.267441  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 15:03:30.288545  501835 provision.go:87] duration metric: took 1.250648382s to configureAuth
	I1121 15:03:30.288617  501835 ubuntu.go:206] setting minikube options for container-runtime
	I1121 15:03:30.288838  501835 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:03:30.288991  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:30.309172  501835 main.go:143] libmachine: Using SSH client type: native
	I1121 15:03:30.309483  501835 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1121 15:03:30.309497  501835 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 15:03:30.708355  501835 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 15:03:30.708459  501835 machine.go:97] duration metric: took 5.236770477s to provisionDockerMachine
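
The sysconfig drop-in written just above marks 10.96.0.0/12, the cluster's service CIDR (see the ServiceCIDR field in the profile config further down), as an insecure registry so image pulls from an in-cluster registry Service don't require TLS. A sketch that composes the same file body; the helper name is made up:

package main

import (
	"fmt"
	"strings"
)

func crioMinikubeOptions(insecureRegistries []string) string {
	var b strings.Builder
	b.WriteString("CRIO_MINIKUBE_OPTIONS='")
	for _, r := range insecureRegistries {
		fmt.Fprintf(&b, "--insecure-registry %s ", r)
	}
	b.WriteString("'\n")
	return b.String()
}

func main() {
	// Matches the file content echoed back over SSH above.
	fmt.Print(crioMinikubeOptions([]string{"10.96.0.0/12"}))
}
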
	I1121 15:03:30.708485  501835 start.go:293] postStartSetup for "default-k8s-diff-port-124330" (driver="docker")
	I1121 15:03:30.708523  501835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 15:03:30.708603  501835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 15:03:30.708695  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:30.738192  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:30.861597  501835 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 15:03:30.867302  501835 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 15:03:30.867387  501835 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 15:03:30.867414  501835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/addons for local assets ...
	I1121 15:03:30.867521  501835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-289204/.minikube/files for local assets ...
	I1121 15:03:30.867697  501835 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem -> 2910602.pem in /etc/ssl/certs
	I1121 15:03:30.867909  501835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 15:03:30.879998  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:03:30.908855  501835 start.go:296] duration metric: took 200.317069ms for postStartSetup
	I1121 15:03:30.909053  501835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 15:03:30.909135  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:30.935479  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:31.050573  501835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 15:03:31.056664  501835 fix.go:56] duration metric: took 5.996788768s for fixHost
	I1121 15:03:31.056741  501835 start.go:83] releasing machines lock for "default-k8s-diff-port-124330", held for 5.996890299s
	I1121 15:03:31.056849  501835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124330
	I1121 15:03:31.080267  501835 ssh_runner.go:195] Run: cat /version.json
	I1121 15:03:31.080333  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:31.080659  501835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 15:03:31.080723  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:31.120418  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:31.128658  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:31.326558  501835 ssh_runner.go:195] Run: systemctl --version
	I1121 15:03:31.333756  501835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 15:03:31.396835  501835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 15:03:31.401712  501835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 15:03:31.401783  501835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 15:03:31.410281  501835 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 15:03:31.410305  501835 start.go:496] detecting cgroup driver to use...
	I1121 15:03:31.410337  501835 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 15:03:31.410388  501835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 15:03:31.426746  501835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 15:03:31.441549  501835 docker.go:218] disabling cri-docker service (if available) ...
	I1121 15:03:31.441610  501835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 15:03:31.458256  501835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 15:03:31.472954  501835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 15:03:31.637607  501835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 15:03:31.780332  501835 docker.go:234] disabling docker service ...
	I1121 15:03:31.780476  501835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 15:03:31.799478  501835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 15:03:31.813879  501835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 15:03:31.989719  501835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 15:03:32.177126  501835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 15:03:32.195635  501835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 15:03:32.213150  501835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 15:03:32.213218  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.229048  501835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 15:03:32.229210  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.244353  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.257406  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.265953  501835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 15:03:32.273842  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.282818  501835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.293905  501835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 15:03:32.311242  501835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 15:03:32.322124  501835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 15:03:32.333134  501835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:03:32.490600  501835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 15:03:32.705088  501835 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 15:03:32.705162  501835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 15:03:32.720952  501835 start.go:564] Will wait 60s for crictl version
	I1121 15:03:32.721033  501835 ssh_runner.go:195] Run: which crictl
	I1121 15:03:32.725021  501835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 15:03:32.751639  501835 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 15:03:32.751721  501835 ssh_runner.go:195] Run: crio --version
	I1121 15:03:32.794823  501835 ssh_runner.go:195] Run: crio --version
	I1121 15:03:32.829127  501835 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
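
After restarting CRI-O, the log notes a 60s wait for the socket path before probing crictl. A sketch of such a poll-until-deadline loop (a plain stat loop; minikube's internals may differ):

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket exists, runtime is accepting connections soon
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}
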
	I1121 15:03:28.311429  499267 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 15:03:28.914535  499267 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 15:03:29.504795  499267 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 15:03:29.505358  499267 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 15:03:29.915689  499267 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 15:03:32.048832  499267 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 15:03:32.310399  499267 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 15:03:33.256910  499267 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 15:03:33.904183  499267 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 15:03:33.904284  499267 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 15:03:33.908709  499267 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 15:03:32.831887  501835 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-124330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 15:03:32.870294  501835 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 15:03:32.876532  501835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:03:32.893333  501835 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 15:03:32.893453  501835 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 15:03:32.893512  501835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:03:32.938007  501835 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:03:32.938028  501835 crio.go:433] Images already preloaded, skipping extraction
	I1121 15:03:32.938083  501835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 15:03:32.966810  501835 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 15:03:32.966885  501835 cache_images.go:86] Images are preloaded, skipping loading
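
The preload check above runs `crictl images --output json` and compares the available tags against what the cluster needs. A sketch of that comparison; the JSON field names ("images", "repoTags") reflect crictl's output format as I understand it and should be treated as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed shape of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// One required image, taken from the pause-image configuration above.
	for _, want := range []string{"registry.k8s.io/pause:3.10.1"} {
		if !have[want] {
			fmt.Println("missing:", want, "- would extract the preload tarball")
			return
		}
	}
	fmt.Println("all images are preloaded for cri-o runtime.")
}
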
	I1121 15:03:32.966907  501835 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1121 15:03:32.967056  501835 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-124330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
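
The kubelet drop-in printed above clears the packaged ExecStart and substitutes one carrying the node-specific flags (the version-pinned binary path, --hostname-override, --node-ip). A text/template sketch that renders an abbreviated version of the same drop-in; this is an illustration, not minikube's template:

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the unit printed in the log above.
	_ = t.Execute(os.Stdout, struct {
		Version, NodeName, NodeIP string
	}{"v1.34.1", "default-k8s-diff-port-124330", "192.168.85.2"})
}
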
	I1121 15:03:32.967185  501835 ssh_runner.go:195] Run: crio config
	I1121 15:03:33.100249  501835 cni.go:84] Creating CNI manager for ""
	I1121 15:03:33.100324  501835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:03:33.100353  501835 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 15:03:33.100415  501835 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-124330 NodeName:default-k8s-diff-port-124330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 15:03:33.100607  501835 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-124330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 15:03:33.100728  501835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 15:03:33.110391  501835 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 15:03:33.110516  501835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 15:03:33.118664  501835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1121 15:03:33.137398  501835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 15:03:33.151740  501835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1121 15:03:33.168144  501835 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 15:03:33.172100  501835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 15:03:33.182609  501835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:03:33.380420  501835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:03:33.407711  501835 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330 for IP: 192.168.85.2
	I1121 15:03:33.407738  501835 certs.go:195] generating shared ca certs ...
	I1121 15:03:33.407754  501835 certs.go:227] acquiring lock for ca certs: {Name:mkd94f7d03fff08336018db9da261a5400b4a828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:33.407888  501835 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key
	I1121 15:03:33.407929  501835 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key
	I1121 15:03:33.407951  501835 certs.go:257] generating profile certs ...
	I1121 15:03:33.408036  501835 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.key
	I1121 15:03:33.408105  501835 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/apiserver.key.00e0670e
	I1121 15:03:33.408148  501835 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/proxy-client.key
	I1121 15:03:33.408272  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem (1338 bytes)
	W1121 15:03:33.408310  501835 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060_empty.pem, impossibly tiny 0 bytes
	I1121 15:03:33.408324  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 15:03:33.408349  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/ca.pem (1078 bytes)
	I1121 15:03:33.408376  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/cert.pem (1123 bytes)
	I1121 15:03:33.408434  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/certs/key.pem (1675 bytes)
	I1121 15:03:33.408480  501835 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem (1708 bytes)
	I1121 15:03:33.409046  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 15:03:33.428322  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 15:03:33.447292  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 15:03:33.468722  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 15:03:33.506711  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1121 15:03:33.540284  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 15:03:33.587981  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 15:03:33.640245  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 15:03:33.709663  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/certs/291060.pem --> /usr/share/ca-certificates/291060.pem (1338 bytes)
	I1121 15:03:33.753057  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/ssl/certs/2910602.pem --> /usr/share/ca-certificates/2910602.pem (1708 bytes)
	I1121 15:03:33.778780  501835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 15:03:33.799551  501835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 15:03:33.816770  501835 ssh_runner.go:195] Run: openssl version
	I1121 15:03:33.824365  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291060.pem && ln -fs /usr/share/ca-certificates/291060.pem /etc/ssl/certs/291060.pem"
	I1121 15:03:33.841811  501835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291060.pem
	I1121 15:03:33.847061  501835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/291060.pem
	I1121 15:03:33.847181  501835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291060.pem
	I1121 15:03:33.889416  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291060.pem /etc/ssl/certs/51391683.0"
	I1121 15:03:33.898251  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2910602.pem && ln -fs /usr/share/ca-certificates/2910602.pem /etc/ssl/certs/2910602.pem"
	I1121 15:03:33.908251  501835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2910602.pem
	I1121 15:03:33.913399  501835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/2910602.pem
	I1121 15:03:33.913780  501835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2910602.pem
	I1121 15:03:33.961573  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2910602.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 15:03:33.970487  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 15:03:33.981835  501835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:03:33.992051  501835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:03:33.992173  501835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 15:03:34.048606  501835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
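
The `openssl x509 -hash` calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks (51391683.0, 3ec20f2e.0, b5213941.0), which is how the system trust store locates a CA by subject. A sketch of that step shelling out to openssl (requires root to write under /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link, then recreate it.
	os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(link, "->", cert)
}
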
	I1121 15:03:34.058059  501835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 15:03:34.062396  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 15:03:34.115239  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 15:03:34.192267  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 15:03:34.297877  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 15:03:34.390990  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 15:03:34.613618  501835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
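
Each `-checkend 86400` run above asks whether the certificate expires within the next 24 hours, which gates regeneration on restart. The equivalent check in Go, using the first certificate path from the sequence above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, matching `openssl x509 -checkend` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 86400s; would regenerate")
	} else {
		fmt.Println("certificate is valid for at least another day")
	}
}
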
	I1121 15:03:34.695657  501835 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-124330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-124330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 15:03:34.695755  501835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 15:03:34.695820  501835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 15:03:34.795540  501835 cri.go:89] found id: "ee9ac53aba59fc1e496aea56983c3d0c392cff161ea0a9c80336aaf6a3bb18d1"
	I1121 15:03:34.795562  501835 cri.go:89] found id: "f3450da3d6505714a2ddbd0849055e0c303889ab8fbf96ab66e5fb100167b3d0"
	I1121 15:03:34.795568  501835 cri.go:89] found id: "c627117ae55976c3bd9490f6441736eebc7000d2c50a16ac0fbd1824c9604beb"
	I1121 15:03:34.795571  501835 cri.go:89] found id: "8812c413c9de68d93c0162764f45b3d55f29007bce2646ce2fb79c02a7766a43"
	I1121 15:03:34.795577  501835 cri.go:89] found id: ""
	I1121 15:03:34.795626  501835 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 15:03:34.827356  501835 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T15:03:34Z" level=error msg="open /run/runc: no such file or directory"
	I1121 15:03:34.827438  501835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 15:03:34.847691  501835 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 15:03:34.847711  501835 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 15:03:34.847766  501835 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 15:03:34.865363  501835 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 15:03:34.865768  501835 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-124330" does not appear in /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:03:34.865874  501835 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-289204/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-124330" cluster setting kubeconfig missing "default-k8s-diff-port-124330" context setting]
	I1121 15:03:34.866146  501835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:34.867409  501835 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 15:03:34.886380  501835 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 15:03:34.886414  501835 kubeadm.go:602] duration metric: took 38.695677ms to restartPrimaryControlPlane
	I1121 15:03:34.886423  501835 kubeadm.go:403] duration metric: took 190.775682ms to StartCluster
	I1121 15:03:34.886438  501835 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:34.886497  501835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:03:34.887088  501835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:34.887288  501835 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:03:34.887552  501835 config.go:182] Loaded profile config "default-k8s-diff-port-124330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:03:34.887600  501835 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 15:03:34.887667  501835 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-124330"
	I1121 15:03:34.887684  501835 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-124330"
	W1121 15:03:34.887695  501835 addons.go:248] addon storage-provisioner should already be in state true
	I1121 15:03:34.887715  501835 host.go:66] Checking if "default-k8s-diff-port-124330" exists ...
	I1121 15:03:34.887744  501835 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-124330"
	I1121 15:03:34.887768  501835 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-124330"
	W1121 15:03:34.887774  501835 addons.go:248] addon dashboard should already be in state true
	I1121 15:03:34.887797  501835 host.go:66] Checking if "default-k8s-diff-port-124330" exists ...
	I1121 15:03:34.888144  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:34.888236  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:34.890740  501835 out.go:179] * Verifying Kubernetes components...
	I1121 15:03:34.890954  501835 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-124330"
	I1121 15:03:34.890978  501835 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-124330"
	I1121 15:03:34.891308  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:34.894952  501835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:03:34.938036  501835 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 15:03:34.941032  501835 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:03:34.941054  501835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 15:03:34.941117  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:34.946096  501835 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-124330"
	W1121 15:03:34.946113  501835 addons.go:248] addon default-storageclass should already be in state true
	I1121 15:03:34.946139  501835 host.go:66] Checking if "default-k8s-diff-port-124330" exists ...
	I1121 15:03:34.946563  501835 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124330 --format={{.State.Status}}
	I1121 15:03:34.998339  501835 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 15:03:35.001255  501835 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1121 15:03:33.913427  499267 out.go:252]   - Booting up control plane ...
	I1121 15:03:33.913532  499267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 15:03:33.913612  499267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 15:03:33.914836  499267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 15:03:33.933345  499267 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 15:03:33.933461  499267 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 15:03:33.944830  499267 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 15:03:33.944934  499267 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 15:03:33.944976  499267 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 15:03:34.103938  499267 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 15:03:34.104068  499267 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 15:03:36.105401  499267 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001786483s
	I1121 15:03:36.109196  499267 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 15:03:36.109300  499267 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1121 15:03:36.109866  499267 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 15:03:36.109965  499267 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 15:03:35.001251  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:35.004475  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 15:03:35.004503  501835 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 15:03:35.004579  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:35.018079  501835 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 15:03:35.018103  501835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 15:03:35.018173  501835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124330
	I1121 15:03:35.043393  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:35.060483  501835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/default-k8s-diff-port-124330/id_rsa Username:docker}
	I1121 15:03:35.417414  501835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:03:35.434158  501835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:03:35.465762  501835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 15:03:35.499544  501835 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-124330" to be "Ready" ...
	I1121 15:03:35.509985  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 15:03:35.510006  501835 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 15:03:35.621968  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 15:03:35.621996  501835 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 15:03:35.757447  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 15:03:35.757467  501835 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 15:03:35.882601  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 15:03:35.882623  501835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 15:03:35.961540  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 15:03:35.961605  501835 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 15:03:35.987901  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 15:03:35.987968  501835 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 15:03:36.005803  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 15:03:36.005880  501835 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 15:03:36.033885  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 15:03:36.033951  501835 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 15:03:36.060067  501835 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 15:03:36.060133  501835 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 15:03:36.088109  501835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 15:03:41.040941  499267 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.931244834s
	I1121 15:03:41.758765  501835 node_ready.go:49] node "default-k8s-diff-port-124330" is "Ready"
	I1121 15:03:41.758804  501835 node_ready.go:38] duration metric: took 6.259160577s for node "default-k8s-diff-port-124330" to be "Ready" ...
	I1121 15:03:41.758819  501835 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:03:41.758885  501835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:03:45.482718  501835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.048475211s)
	I1121 15:03:45.482784  501835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.016958752s)
	I1121 15:03:45.840467  501835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.7522517s)
	I1121 15:03:45.840504  501835 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.081592s)
	I1121 15:03:45.840528  501835 api_server.go:72] duration metric: took 10.953212337s to wait for apiserver process to appear ...
	I1121 15:03:45.840534  501835 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:03:45.840628  501835 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1121 15:03:45.843347  501835 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-124330 addons enable metrics-server
	
	I1121 15:03:45.846288  501835 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1121 15:03:45.891741  499267 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.782536095s
	I1121 15:03:47.610937  499267 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.501408365s
	I1121 15:03:47.639387  499267 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 15:03:47.666140  499267 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 15:03:47.682764  499267 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 15:03:47.683231  499267 kubeadm.go:319] [mark-control-plane] Marking the node auto-609503 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 15:03:47.696780  499267 kubeadm.go:319] [bootstrap-token] Using token: 3hic3h.1sikr0fxhzk10e38
	I1121 15:03:47.699667  499267 out.go:252]   - Configuring RBAC rules ...
	I1121 15:03:47.699791  499267 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 15:03:47.707820  499267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 15:03:47.716290  499267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 15:03:47.721196  499267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 15:03:47.725845  499267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 15:03:47.730058  499267 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 15:03:48.018423  499267 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 15:03:48.469673  499267 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 15:03:49.033547  499267 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 15:03:49.035740  499267 kubeadm.go:319] 
	I1121 15:03:49.035825  499267 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 15:03:49.035832  499267 kubeadm.go:319] 
	I1121 15:03:49.035918  499267 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 15:03:49.035928  499267 kubeadm.go:319] 
	I1121 15:03:49.035954  499267 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 15:03:49.036177  499267 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 15:03:49.036241  499267 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 15:03:49.036247  499267 kubeadm.go:319] 
	I1121 15:03:49.036304  499267 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 15:03:49.036308  499267 kubeadm.go:319] 
	I1121 15:03:49.036375  499267 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 15:03:49.036403  499267 kubeadm.go:319] 
	I1121 15:03:49.036459  499267 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 15:03:49.036546  499267 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 15:03:49.036622  499267 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 15:03:49.036630  499267 kubeadm.go:319] 
	I1121 15:03:49.037000  499267 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 15:03:49.037129  499267 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 15:03:49.037136  499267 kubeadm.go:319] 
	I1121 15:03:49.038962  499267 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3hic3h.1sikr0fxhzk10e38 \
	I1121 15:03:49.039081  499267 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 \
	I1121 15:03:49.039284  499267 kubeadm.go:319] 	--control-plane 
	I1121 15:03:49.039295  499267 kubeadm.go:319] 
	I1121 15:03:49.039526  499267 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 15:03:49.039536  499267 kubeadm.go:319] 
	I1121 15:03:49.039807  499267 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3hic3h.1sikr0fxhzk10e38 \
	I1121 15:03:49.040091  499267 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6fe5ac5e58e978ea9557e16190af072600ab8f16d36d1c1a598a4894130bac92 
	I1121 15:03:49.054813  499267 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 15:03:49.055050  499267 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 15:03:49.055160  499267 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 15:03:49.055175  499267 cni.go:84] Creating CNI manager for ""
	I1121 15:03:49.055182  499267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 15:03:49.058415  499267 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 15:03:45.849173  501835 addons.go:530] duration metric: took 10.961554415s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1121 15:03:45.863745  501835 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1121 15:03:45.873401  501835 api_server.go:141] control plane version: v1.34.1
	I1121 15:03:45.873429  501835 api_server.go:131] duration metric: took 32.815723ms to wait for apiserver health ...
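	The healthz wait that brackets these lines is a plain HTTPS poll: GET https://192.168.85.2:8444/healthz until it returns 200 with body "ok". A rough self-contained sketch of such a loop; the helper name, the 500ms poll interval, and the InsecureSkipVerify shortcut are illustrative assumptions, not minikube's code:
	
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)
	
		func waitForHealthz(url string, timeout time.Duration) error {
			client := &http.Client{
				Timeout:   2 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				resp, err := client.Get(url)
				if err == nil {
					body, _ := io.ReadAll(resp.Body)
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK {
						fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
						return nil
					}
				}
				time.Sleep(500 * time.Millisecond)
			}
			return fmt.Errorf("apiserver not healthy after %s", timeout)
		}
	
		func main() {
			if err := waitForHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
				fmt.Println(err)
			}
		}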
	I1121 15:03:45.873438  501835 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:03:45.885593  501835 system_pods.go:59] 8 kube-system pods found
	I1121 15:03:45.885629  501835 system_pods.go:61] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:03:45.885638  501835 system_pods.go:61] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:03:45.885647  501835 system_pods.go:61] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:03:45.885652  501835 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:03:45.885656  501835 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:03:45.885661  501835 system_pods.go:61] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:03:45.885667  501835 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:03:45.885671  501835 system_pods.go:61] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Running
	I1121 15:03:45.885686  501835 system_pods.go:74] duration metric: took 12.232831ms to wait for pod list to return data ...
	I1121 15:03:45.885694  501835 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:03:45.898790  501835 default_sa.go:45] found service account: "default"
	I1121 15:03:45.898821  501835 default_sa.go:55] duration metric: took 13.120702ms for default service account to be created ...
	I1121 15:03:45.898831  501835 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:03:45.906212  501835 system_pods.go:86] 8 kube-system pods found
	I1121 15:03:45.906259  501835 system_pods.go:89] "coredns-66bc5c9577-zhrs7" [6d450543-7e6c-43d8-93ac-9ceca2afe29a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:03:45.906269  501835 system_pods.go:89] "etcd-default-k8s-diff-port-124330" [8e827f48-9cc4-469d-a51a-af4fcfbff43f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 15:03:45.906275  501835 system_pods.go:89] "kindnet-wdpnm" [8808169a-c3a4-4b7c-8703-356c5678bb6a] Running
	I1121 15:03:45.906280  501835 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124330" [a9842c68-c43c-4c9c-bcc6-f9278c853ba1] Running
	I1121 15:03:45.906285  501835 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124330" [c388eb67-dcdf-480e-bd3e-d2e7dda823c2] Running
	I1121 15:03:45.906289  501835 system_pods.go:89] "kube-proxy-fr5df" [968146ae-c634-4d71-88d9-dd180b847494] Running
	I1121 15:03:45.906295  501835 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124330" [0b217514-f104-4cb6-88bf-36c746a3fff2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 15:03:45.906300  501835 system_pods.go:89] "storage-provisioner" [72853767-c110-4974-813d-a43eb4ea90a6] Running
	I1121 15:03:45.906309  501835 system_pods.go:126] duration metric: took 7.471569ms to wait for k8s-apps to be running ...
	I1121 15:03:45.906322  501835 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:03:45.906379  501835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:03:45.929802  501835 system_svc.go:56] duration metric: took 23.470336ms WaitForService to wait for kubelet
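	The WaitForService step above reduces to a single shell probe: systemctl is-active --quiet exits 0 only while the unit is active, so the exit status alone is the answer. A tiny Go equivalent (illustrative; minikube actually runs this through its ssh_runner):
	
		package main
	
		import (
			"fmt"
			"os/exec"
		)
	
		func main() {
			// Zero exit status from `systemctl is-active --quiet` means the unit is active.
			err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
			fmt.Println("kubelet active:", err == nil)
		}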
	I1121 15:03:45.929833  501835 kubeadm.go:587] duration metric: took 11.042515893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:03:45.929851  501835 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:03:45.947000  501835 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:03:45.947037  501835 node_conditions.go:123] node cpu capacity is 2
	I1121 15:03:45.947050  501835 node_conditions.go:105] duration metric: took 17.19233ms to run NodePressure ...
	I1121 15:03:45.947063  501835 start.go:242] waiting for startup goroutines ...
	I1121 15:03:45.947106  501835 start.go:247] waiting for cluster config update ...
	I1121 15:03:45.947124  501835 start.go:256] writing updated cluster config ...
	I1121 15:03:45.947466  501835 ssh_runner.go:195] Run: rm -f paused
	I1121 15:03:45.951661  501835 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:03:45.956788  501835 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zhrs7" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 15:03:47.963037  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	I1121 15:03:49.061477  499267 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 15:03:49.070478  499267 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 15:03:49.070549  499267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 15:03:49.115582  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
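	The CNI step above is two moves: copy the rendered kindnet manifest into the machine as /var/tmp/minikube/cni.yaml, then apply it with the pinned kubectl binary against the in-VM kubeconfig. A sketch of the apply half, wrapping the exact command from the log line in os/exec for illustration:
	
		package main
	
		import (
			"log"
			"os/exec"
		)
	
		func main() {
			cmd := exec.Command("sudo",
				"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
				"--kubeconfig=/var/lib/minikube/kubeconfig",
				"-f", "/var/tmp/minikube/cni.yaml")
			out, err := cmd.CombinedOutput()
			if err != nil {
				log.Fatalf("apply failed: %v\n%s", err, out)
			}
			log.Printf("%s", out)
		}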
	I1121 15:03:49.704040  499267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 15:03:49.704171  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:49.704251  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-609503 minikube.k8s.io/updated_at=2025_11_21T15_03_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=auto-609503 minikube.k8s.io/primary=true
	I1121 15:03:50.122064  499267 ops.go:34] apiserver oom_adj: -16
	I1121 15:03:50.122168  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:50.622542  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:51.122275  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:51.623262  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:52.122212  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:52.623088  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:53.122758  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:53.622684  499267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 15:03:53.743035  499267 kubeadm.go:1114] duration metric: took 4.038910572s to wait for elevateKubeSystemPrivileges
	I1121 15:03:53.743140  499267 kubeadm.go:403] duration metric: took 30.156788323s to StartCluster
	I1121 15:03:53.743162  499267 settings.go:142] acquiring lock: {Name:mkf76fd3ef2c30c8980aacc36945e2f280922fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:53.743272  499267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 15:03:53.744274  499267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/kubeconfig: {Name:mk16490170000e2914b9e5316404a6e7a2e15e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 15:03:53.744586  499267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 15:03:53.744585  499267 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 15:03:53.744872  499267 config.go:182] Loaded profile config "auto-609503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 15:03:53.744911  499267 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 15:03:53.744973  499267 addons.go:70] Setting storage-provisioner=true in profile "auto-609503"
	I1121 15:03:53.744986  499267 addons.go:239] Setting addon storage-provisioner=true in "auto-609503"
	I1121 15:03:53.745026  499267 host.go:66] Checking if "auto-609503" exists ...
	I1121 15:03:53.745484  499267 cli_runner.go:164] Run: docker container inspect auto-609503 --format={{.State.Status}}
	I1121 15:03:53.745998  499267 addons.go:70] Setting default-storageclass=true in profile "auto-609503"
	I1121 15:03:53.746016  499267 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-609503"
	I1121 15:03:53.746288  499267 cli_runner.go:164] Run: docker container inspect auto-609503 --format={{.State.Status}}
	I1121 15:03:53.748609  499267 out.go:179] * Verifying Kubernetes components...
	I1121 15:03:53.751840  499267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 15:03:53.798897  499267 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1121 15:03:49.964155  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:03:52.464929  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:03:54.465954  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	I1121 15:03:53.802786  499267 addons.go:239] Setting addon default-storageclass=true in "auto-609503"
	I1121 15:03:53.802828  499267 host.go:66] Checking if "auto-609503" exists ...
	I1121 15:03:53.803233  499267 cli_runner.go:164] Run: docker container inspect auto-609503 --format={{.State.Status}}
	I1121 15:03:53.803571  499267 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:03:53.803585  499267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 15:03:53.803633  499267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-609503
	I1121 15:03:53.848001  499267 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 15:03:53.848033  499267 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 15:03:53.848109  499267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-609503
	I1121 15:03:53.860236  499267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/auto-609503/id_rsa Username:docker}
	I1121 15:03:53.884800  499267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/auto-609503/id_rsa Username:docker}
	I1121 15:03:54.312445  499267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 15:03:54.508883  499267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 15:03:54.565891  499267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 15:03:54.566079  499267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 15:03:55.801325  499267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.292353349s)
	I1121 15:03:55.801428  499267 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.235317302s)
	I1121 15:03:55.801445  499267 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.235487757s)
	I1121 15:03:55.801479  499267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.488933894s)
	I1121 15:03:55.802863  499267 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1121 15:03:55.802910  499267 node_ready.go:35] waiting up to 15m0s for node "auto-609503" to be "Ready" ...
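	The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a log directive before errors and, ahead of the forward block, a hosts stanza so that host.minikube.internal resolves to the host gateway. Reconstructed from the sed expression itself, the injected Corefile fragment is:
	
		hosts {
		   192.168.76.1 host.minikube.internal
		   fallthrough
		}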
	I1121 15:03:55.886123  499267 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 15:03:55.889082  499267 addons.go:530] duration metric: took 2.144145193s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 15:03:56.307479  499267 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-609503" context rescaled to 1 replicas
	W1121 15:03:57.806337  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:03:56.962481  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:03:58.963149  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:03:59.806492  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:01.806798  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:00.963328  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:02.972976  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:04.307513  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:06.806410  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:05.462350  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:07.462559  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:08.806689  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:11.306403  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:09.963163  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:11.964992  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:14.463134  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:13.306584  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:15.806323  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:16.965548  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:19.462399  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:18.306580  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:20.806162  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:21.462682  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:23.962660  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	W1121 15:04:25.962759  501835 pod_ready.go:104] pod "coredns-66bc5c9577-zhrs7" is not "Ready", error: <nil>
	I1121 15:04:26.462590  501835 pod_ready.go:94] pod "coredns-66bc5c9577-zhrs7" is "Ready"
	I1121 15:04:26.462621  501835 pod_ready.go:86] duration metric: took 40.505803075s for pod "coredns-66bc5c9577-zhrs7" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.465456  501835 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.469910  501835 pod_ready.go:94] pod "etcd-default-k8s-diff-port-124330" is "Ready"
	I1121 15:04:26.469942  501835 pod_ready.go:86] duration metric: took 4.458144ms for pod "etcd-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.472604  501835 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.477324  501835 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-124330" is "Ready"
	I1121 15:04:26.477354  501835 pod_ready.go:86] duration metric: took 4.72419ms for pod "kube-apiserver-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.479989  501835 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.660283  501835 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-124330" is "Ready"
	I1121 15:04:26.660309  501835 pod_ready.go:86] duration metric: took 180.293131ms for pod "kube-controller-manager-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:26.860669  501835 pod_ready.go:83] waiting for pod "kube-proxy-fr5df" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:27.259919  501835 pod_ready.go:94] pod "kube-proxy-fr5df" is "Ready"
	I1121 15:04:27.259952  501835 pod_ready.go:86] duration metric: took 399.256376ms for pod "kube-proxy-fr5df" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:27.460148  501835 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:27.860506  501835 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-124330" is "Ready"
	I1121 15:04:27.860534  501835 pod_ready.go:86] duration metric: took 400.3565ms for pod "kube-scheduler-default-k8s-diff-port-124330" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:27.860545  501835 pod_ready.go:40] duration metric: took 41.908852364s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
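	Every pod_ready.go line above boils down to one lookup: a pod counts as "Ready" when its PodReady condition reports True. A self-contained client-go sketch of that check; the function name, kubeconfig path, and target pod are assumptions for illustration, not minikube's source:
	
		package main
	
		import (
			"context"
			"fmt"
	
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
	
		func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		}
	
		func main() {
			config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			cs := kubernetes.NewForConfigOrDie(config)
			ready, err := podReady(cs, "kube-system", "coredns-66bc5c9577-zhrs7")
			fmt.Println("ready:", ready, "err:", err)
		}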
	I1121 15:04:27.912609  501835 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:04:27.917936  501835 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-124330" cluster and "default" namespace by default
	W1121 15:04:23.306373  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:25.306832  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:27.806279  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:30.306792  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	W1121 15:04:32.805939  499267 node_ready.go:57] node "auto-609503" has "Ready":"False" status (will retry)
	I1121 15:04:34.306321  499267 node_ready.go:49] node "auto-609503" is "Ready"
	I1121 15:04:34.306355  499267 node_ready.go:38] duration metric: took 38.503410571s for node "auto-609503" to be "Ready" ...
	I1121 15:04:34.306375  499267 api_server.go:52] waiting for apiserver process to appear ...
	I1121 15:04:34.306439  499267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 15:04:34.325191  499267 api_server.go:72] duration metric: took 40.580576364s to wait for apiserver process to appear ...
	I1121 15:04:34.325218  499267 api_server.go:88] waiting for apiserver healthz status ...
	I1121 15:04:34.325239  499267 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 15:04:34.334733  499267 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 15:04:34.336305  499267 api_server.go:141] control plane version: v1.34.1
	I1121 15:04:34.336329  499267 api_server.go:131] duration metric: took 11.102963ms to wait for apiserver health ...
	I1121 15:04:34.336338  499267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 15:04:34.341711  499267 system_pods.go:59] 8 kube-system pods found
	I1121 15:04:34.341748  499267 system_pods.go:61] "coredns-66bc5c9577-t8cmt" [cfe3d599-f8e6-439d-ba53-ed8c41d0ec68] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:04:34.341755  499267 system_pods.go:61] "etcd-auto-609503" [1643ef90-f284-407a-ba53-40b3cb0ce799] Running
	I1121 15:04:34.341760  499267 system_pods.go:61] "kindnet-kwthc" [ee54de0c-419d-4b2c-ad50-c1645cdff6b2] Running
	I1121 15:04:34.341765  499267 system_pods.go:61] "kube-apiserver-auto-609503" [9fcd9ac6-1199-4a30-934a-c299263f0683] Running
	I1121 15:04:34.341769  499267 system_pods.go:61] "kube-controller-manager-auto-609503" [bbc7db4e-9280-4bea-9ef2-0da7a644cc7b] Running
	I1121 15:04:34.341773  499267 system_pods.go:61] "kube-proxy-7wgzz" [340a0657-b877-4de1-aaa5-65e4aa99fd68] Running
	I1121 15:04:34.341777  499267 system_pods.go:61] "kube-scheduler-auto-609503" [6a3d7cb9-adf5-40a8-8663-03681e985f47] Running
	I1121 15:04:34.341783  499267 system_pods.go:61] "storage-provisioner" [52dff593-b9aa-4dd0-856b-03bcc6136a13] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:04:34.341789  499267 system_pods.go:74] duration metric: took 5.444888ms to wait for pod list to return data ...
	I1121 15:04:34.341797  499267 default_sa.go:34] waiting for default service account to be created ...
	I1121 15:04:34.351651  499267 default_sa.go:45] found service account: "default"
	I1121 15:04:34.351678  499267 default_sa.go:55] duration metric: took 9.874017ms for default service account to be created ...
	I1121 15:04:34.351689  499267 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 15:04:34.358211  499267 system_pods.go:86] 8 kube-system pods found
	I1121 15:04:34.358243  499267 system_pods.go:89] "coredns-66bc5c9577-t8cmt" [cfe3d599-f8e6-439d-ba53-ed8c41d0ec68] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:04:34.358250  499267 system_pods.go:89] "etcd-auto-609503" [1643ef90-f284-407a-ba53-40b3cb0ce799] Running
	I1121 15:04:34.358256  499267 system_pods.go:89] "kindnet-kwthc" [ee54de0c-419d-4b2c-ad50-c1645cdff6b2] Running
	I1121 15:04:34.358260  499267 system_pods.go:89] "kube-apiserver-auto-609503" [9fcd9ac6-1199-4a30-934a-c299263f0683] Running
	I1121 15:04:34.358264  499267 system_pods.go:89] "kube-controller-manager-auto-609503" [bbc7db4e-9280-4bea-9ef2-0da7a644cc7b] Running
	I1121 15:04:34.358268  499267 system_pods.go:89] "kube-proxy-7wgzz" [340a0657-b877-4de1-aaa5-65e4aa99fd68] Running
	I1121 15:04:34.358272  499267 system_pods.go:89] "kube-scheduler-auto-609503" [6a3d7cb9-adf5-40a8-8663-03681e985f47] Running
	I1121 15:04:34.358277  499267 system_pods.go:89] "storage-provisioner" [52dff593-b9aa-4dd0-856b-03bcc6136a13] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:04:34.358305  499267 retry.go:31] will retry after 259.035465ms: missing components: kube-dns
	I1121 15:04:34.626392  499267 system_pods.go:86] 8 kube-system pods found
	I1121 15:04:34.626429  499267 system_pods.go:89] "coredns-66bc5c9577-t8cmt" [cfe3d599-f8e6-439d-ba53-ed8c41d0ec68] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:04:34.626437  499267 system_pods.go:89] "etcd-auto-609503" [1643ef90-f284-407a-ba53-40b3cb0ce799] Running
	I1121 15:04:34.626443  499267 system_pods.go:89] "kindnet-kwthc" [ee54de0c-419d-4b2c-ad50-c1645cdff6b2] Running
	I1121 15:04:34.626448  499267 system_pods.go:89] "kube-apiserver-auto-609503" [9fcd9ac6-1199-4a30-934a-c299263f0683] Running
	I1121 15:04:34.626452  499267 system_pods.go:89] "kube-controller-manager-auto-609503" [bbc7db4e-9280-4bea-9ef2-0da7a644cc7b] Running
	I1121 15:04:34.626457  499267 system_pods.go:89] "kube-proxy-7wgzz" [340a0657-b877-4de1-aaa5-65e4aa99fd68] Running
	I1121 15:04:34.626461  499267 system_pods.go:89] "kube-scheduler-auto-609503" [6a3d7cb9-adf5-40a8-8663-03681e985f47] Running
	I1121 15:04:34.626467  499267 system_pods.go:89] "storage-provisioner" [52dff593-b9aa-4dd0-856b-03bcc6136a13] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:04:34.626484  499267 retry.go:31] will retry after 253.677534ms: missing components: kube-dns
	I1121 15:04:34.884608  499267 system_pods.go:86] 8 kube-system pods found
	I1121 15:04:34.884644  499267 system_pods.go:89] "coredns-66bc5c9577-t8cmt" [cfe3d599-f8e6-439d-ba53-ed8c41d0ec68] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 15:04:34.884651  499267 system_pods.go:89] "etcd-auto-609503" [1643ef90-f284-407a-ba53-40b3cb0ce799] Running
	I1121 15:04:34.884658  499267 system_pods.go:89] "kindnet-kwthc" [ee54de0c-419d-4b2c-ad50-c1645cdff6b2] Running
	I1121 15:04:34.884662  499267 system_pods.go:89] "kube-apiserver-auto-609503" [9fcd9ac6-1199-4a30-934a-c299263f0683] Running
	I1121 15:04:34.884666  499267 system_pods.go:89] "kube-controller-manager-auto-609503" [bbc7db4e-9280-4bea-9ef2-0da7a644cc7b] Running
	I1121 15:04:34.884671  499267 system_pods.go:89] "kube-proxy-7wgzz" [340a0657-b877-4de1-aaa5-65e4aa99fd68] Running
	I1121 15:04:34.884674  499267 system_pods.go:89] "kube-scheduler-auto-609503" [6a3d7cb9-adf5-40a8-8663-03681e985f47] Running
	I1121 15:04:34.884680  499267 system_pods.go:89] "storage-provisioner" [52dff593-b9aa-4dd0-856b-03bcc6136a13] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 15:04:34.884695  499267 retry.go:31] will retry after 337.98202ms: missing components: kube-dns
	I1121 15:04:35.227360  499267 system_pods.go:86] 8 kube-system pods found
	I1121 15:04:35.227391  499267 system_pods.go:89] "coredns-66bc5c9577-t8cmt" [cfe3d599-f8e6-439d-ba53-ed8c41d0ec68] Running
	I1121 15:04:35.227399  499267 system_pods.go:89] "etcd-auto-609503" [1643ef90-f284-407a-ba53-40b3cb0ce799] Running
	I1121 15:04:35.227403  499267 system_pods.go:89] "kindnet-kwthc" [ee54de0c-419d-4b2c-ad50-c1645cdff6b2] Running
	I1121 15:04:35.227407  499267 system_pods.go:89] "kube-apiserver-auto-609503" [9fcd9ac6-1199-4a30-934a-c299263f0683] Running
	I1121 15:04:35.227411  499267 system_pods.go:89] "kube-controller-manager-auto-609503" [bbc7db4e-9280-4bea-9ef2-0da7a644cc7b] Running
	I1121 15:04:35.227415  499267 system_pods.go:89] "kube-proxy-7wgzz" [340a0657-b877-4de1-aaa5-65e4aa99fd68] Running
	I1121 15:04:35.227428  499267 system_pods.go:89] "kube-scheduler-auto-609503" [6a3d7cb9-adf5-40a8-8663-03681e985f47] Running
	I1121 15:04:35.227436  499267 system_pods.go:89] "storage-provisioner" [52dff593-b9aa-4dd0-856b-03bcc6136a13] Running
	I1121 15:04:35.227444  499267 system_pods.go:126] duration metric: took 875.749019ms to wait for k8s-apps to be running ...
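	The retry.go lines above (will retry after 259.035465ms, 253.677534ms, 337.98202ms) show short jittered delays between polls rather than a fixed interval. A minimal sketch of that shape; the loop body, delay range, and helper name are assumptions:
	
		package main
	
		import (
			"errors"
			"fmt"
			"math/rand"
			"time"
		)
	
		func retryUntil(timeout time.Duration, check func() error) error {
			deadline := time.Now().Add(timeout)
			for {
				err := check()
				if err == nil {
					return nil
				}
				if time.Now().After(deadline) {
					return err
				}
				// Randomized delay in the same few-hundred-millisecond range seen in the log.
				d := time.Duration(200+rand.Intn(200)) * time.Millisecond
				fmt.Printf("will retry after %s: %v\n", d, err)
				time.Sleep(d)
			}
		}
	
		func main() {
			attempts := 0
			_ = retryUntil(5*time.Second, func() error {
				attempts++
				if attempts < 3 {
					return errors.New("missing components: kube-dns")
				}
				return nil
			})
		}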
	I1121 15:04:35.227458  499267 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 15:04:35.227530  499267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 15:04:35.241653  499267 system_svc.go:56] duration metric: took 14.18468ms WaitForService to wait for kubelet
	I1121 15:04:35.241682  499267 kubeadm.go:587] duration metric: took 41.497072042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 15:04:35.241706  499267 node_conditions.go:102] verifying NodePressure condition ...
	I1121 15:04:35.247564  499267 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 15:04:35.247596  499267 node_conditions.go:123] node cpu capacity is 2
	I1121 15:04:35.247609  499267 node_conditions.go:105] duration metric: took 5.896996ms to run NodePressure ...
	I1121 15:04:35.247622  499267 start.go:242] waiting for startup goroutines ...
	I1121 15:04:35.247629  499267 start.go:247] waiting for cluster config update ...
	I1121 15:04:35.247640  499267 start.go:256] writing updated cluster config ...
	I1121 15:04:35.247936  499267 ssh_runner.go:195] Run: rm -f paused
	I1121 15:04:35.253279  499267 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:04:35.257342  499267 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t8cmt" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.262515  499267 pod_ready.go:94] pod "coredns-66bc5c9577-t8cmt" is "Ready"
	I1121 15:04:35.262559  499267 pod_ready.go:86] duration metric: took 5.18685ms for pod "coredns-66bc5c9577-t8cmt" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.265789  499267 pod_ready.go:83] waiting for pod "etcd-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.271173  499267 pod_ready.go:94] pod "etcd-auto-609503" is "Ready"
	I1121 15:04:35.271207  499267 pod_ready.go:86] duration metric: took 5.391758ms for pod "etcd-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.273705  499267 pod_ready.go:83] waiting for pod "kube-apiserver-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.282211  499267 pod_ready.go:94] pod "kube-apiserver-auto-609503" is "Ready"
	I1121 15:04:35.282282  499267 pod_ready.go:86] duration metric: took 8.549144ms for pod "kube-apiserver-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.285052  499267 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.657420  499267 pod_ready.go:94] pod "kube-controller-manager-auto-609503" is "Ready"
	I1121 15:04:35.657450  499267 pod_ready.go:86] duration metric: took 372.370879ms for pod "kube-controller-manager-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:35.857853  499267 pod_ready.go:83] waiting for pod "kube-proxy-7wgzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:36.257605  499267 pod_ready.go:94] pod "kube-proxy-7wgzz" is "Ready"
	I1121 15:04:36.257689  499267 pod_ready.go:86] duration metric: took 399.809504ms for pod "kube-proxy-7wgzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:36.458314  499267 pod_ready.go:83] waiting for pod "kube-scheduler-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:36.857596  499267 pod_ready.go:94] pod "kube-scheduler-auto-609503" is "Ready"
	I1121 15:04:36.857621  499267 pod_ready.go:86] duration metric: took 399.218613ms for pod "kube-scheduler-auto-609503" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 15:04:36.857634  499267 pod_ready.go:40] duration metric: took 1.604279707s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 15:04:36.921467  499267 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 15:04:36.924644  499267 out.go:179] * Done! kubectl is now configured to use "auto-609503" cluster and "default" namespace by default
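	The minor skew: 1 note in both Done! lines compares only the minor components of the client and server versions; kubectl is supported within one minor version of the apiserver in either direction, so 1.33 against 1.34 rates a note rather than a warning. A toy reconstruction of the arithmetic:
	
		package main
	
		import (
			"fmt"
			"strconv"
			"strings"
		)
	
		// minor extracts the middle component of a "major.minor.patch" version string.
		func minor(v string) int {
			parts := strings.Split(v, ".")
			if len(parts) < 2 {
				return 0
			}
			m, _ := strconv.Atoi(parts[1])
			return m
		}
	
		func main() {
			client, cluster := "1.33.2", "1.34.1"
			skew := minor(cluster) - minor(client)
			if skew < 0 {
				skew = -skew
			}
			fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
		}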
	
	
	==> CRI-O <==
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.283823324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.283995478Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/11b70626da0821f486d59c974314c4558b518975f50376f3c7636fb7a9730b48/merged/etc/passwd: no such file or directory"
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.284015662Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/11b70626da0821f486d59c974314c4558b518975f50376f3c7636fb7a9730b48/merged/etc/group: no such file or directory"
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.284249232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.306121911Z" level=info msg="Created container 7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db: kube-system/storage-provisioner/storage-provisioner" id=9518084d-b665-44a7-90df-5bd29805dcba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.307065734Z" level=info msg="Starting container: 7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db" id=6d74a156-cf2a-49b2-a30c-d2afeda038a9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 15:04:15 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:15.309302609Z" level=info msg="Started container" PID=1650 containerID=7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db description=kube-system/storage-provisioner/storage-provisioner id=6d74a156-cf2a-49b2-a30c-d2afeda038a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8aaa62b4ab20edb90bbff8b67d6117b1cc9917bd6dc99bd665902437a3bd8790
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.430861431Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.4346358Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.4346732Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.434696503Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.439483101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.439516932Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.439540177Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.442901766Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.442939281Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.442962108Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.445909719Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.445942967Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.445963595Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.449616839Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.449650702Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.449674136Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.453323688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 15:04:24 default-k8s-diff-port-124330 crio[657]: time="2025-11-21T15:04:24.453357058Z" level=info msg="Updated default CNI network name to kindnet"
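	The CRI-O entries above are filesystem-watch events: kindnet writes 10-kindnet.conflist.temp and renames it into place, and CRI-O reloads its default CNI network on every CREATE/WRITE/RENAME it observes under /etc/cni/net.d. The same pattern, sketched with the widely used fsnotify package as an assumed stand-in for CRI-O's own watcher:
	
		package main
	
		import (
			"log"
	
			"github.com/fsnotify/fsnotify"
		)
	
		func main() {
			watcher, err := fsnotify.NewWatcher()
			if err != nil {
				log.Fatal(err)
			}
			defer watcher.Close()
			if err := watcher.Add("/etc/cni/net.d"); err != nil {
				log.Fatal(err)
			}
			for {
				select {
				case ev := <-watcher.Events:
					// A real runtime would re-parse the conflist files here.
					log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
				case err := <-watcher.Errors:
					log.Println("watch error:", err)
				}
			}
		}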
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7b937ecab0a24       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   8aaa62b4ab20e       storage-provisioner                                    kube-system
	0df3c2682375b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           34 seconds ago       Exited              dashboard-metrics-scraper   2                   7719e009831c0       dashboard-metrics-scraper-6ffb444bf9-kzfq7             kubernetes-dashboard
	0328bfae2ad66       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   e0814e61a4311       kubernetes-dashboard-855c9754f9-8j9t6                  kubernetes-dashboard
	1b103ed62372e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   7e073d75b5257       busybox                                                default
	5b113bd590fba       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   437f63aec44b1       coredns-66bc5c9577-zhrs7                               kube-system
	c883b96946c23       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   e79f03ba22fa3       kube-proxy-fr5df                                       kube-system
	5f0e5e46cc630       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   8aaa62b4ab20e       storage-provisioner                                    kube-system
	8542c4d9705bc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   c84229870d8b5       kindnet-wdpnm                                          kube-system
	ee9ac53aba59f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   a2d7008d1f4aa       kube-controller-manager-default-k8s-diff-port-124330   kube-system
	f3450da3d6505       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   813c975080bb7       kube-apiserver-default-k8s-diff-port-124330            kube-system
	c627117ae5597       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ddc088e739d2d       kube-scheduler-default-k8s-diff-port-124330            kube-system
	8812c413c9de6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   7d0f9169385a0       etcd-default-k8s-diff-port-124330                      kube-system
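	This table is the CRI-side view of the node; the equivalent manual query is crictl's listing of all containers, including exited ones:
	
		sudo crictl ps -a
	
	The two Exited rows are earlier attempts retained for inspection: storage-provisioner attempt 1 was superseded by the running attempt 2, while dashboard-metrics-scraper shows its most recent attempt in the Exited state.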
	
	
	==> coredns [5b113bd590fba8ccf5296efcebfb784b1cf6e565590f72d8fad43cb29967ffdc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55343 - 52381 "HINFO IN 3673569578872881866.5877170825921050816. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019226274s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-124330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-124330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=default-k8s-diff-port-124330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T15_02_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 15:02:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-124330
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 15:04:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 15:04:23 +0000   Fri, 21 Nov 2025 15:02:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 15:04:23 +0000   Fri, 21 Nov 2025 15:02:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 15:04:23 +0000   Fri, 21 Nov 2025 15:02:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 15:04:23 +0000   Fri, 21 Nov 2025 15:02:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-124330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                6a639b89-86eb-4814-8ac4-d429830f770c
	  Boot ID:                    7c29c371-e39f-4a18-af7c-1ed33287cef3
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 coredns-66bc5c9577-zhrs7                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m30s
	  kube-system                 etcd-default-k8s-diff-port-124330                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m35s
	  kube-system                 kindnet-wdpnm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m30s
	  kube-system                 kube-apiserver-default-k8s-diff-port-124330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-124330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-fr5df                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-default-k8s-diff-port-124330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kzfq7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8j9t6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m28s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m45s (x8 over 2m46s)  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m45s (x8 over 2m46s)  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m45s (x8 over 2m46s)  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m35s                  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s                  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m35s                  kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m31s                  node-controller  Node default-k8s-diff-port-124330 event: Registered Node default-k8s-diff-port-124330 in Controller
	  Normal   NodeReady                109s                   kubelet          Node default-k8s-diff-port-124330 status is now: NodeReady
	  Normal   Starting                 72s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 72s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)      kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)      kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)      kubelet          Node default-k8s-diff-port-124330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                    node-controller  Node default-k8s-diff-port-124330 event: Registered Node default-k8s-diff-port-124330 in Controller
	
	
	==> dmesg <==
	[ +45.234984] overlayfs: idmapped layers are currently not supported
	[Nov21 14:41] overlayfs: idmapped layers are currently not supported
	[ +37.646493] overlayfs: idmapped layers are currently not supported
	[Nov21 14:42] overlayfs: idmapped layers are currently not supported
	[Nov21 14:44] overlayfs: idmapped layers are currently not supported
	[Nov21 14:45] overlayfs: idmapped layers are currently not supported
	[Nov21 14:47] overlayfs: idmapped layers are currently not supported
	[Nov21 14:48] overlayfs: idmapped layers are currently not supported
	[Nov21 14:49] overlayfs: idmapped layers are currently not supported
	[Nov21 14:51] overlayfs: idmapped layers are currently not supported
	[Nov21 14:54] overlayfs: idmapped layers are currently not supported
	[ +52.676525] overlayfs: idmapped layers are currently not supported
	[  +0.105529] overlayfs: idmapped layers are currently not supported
	[Nov21 14:55] overlayfs: idmapped layers are currently not supported
	[Nov21 14:56] overlayfs: idmapped layers are currently not supported
	[Nov21 14:57] overlayfs: idmapped layers are currently not supported
	[Nov21 14:58] overlayfs: idmapped layers are currently not supported
	[Nov21 14:59] overlayfs: idmapped layers are currently not supported
	[Nov21 15:00] overlayfs: idmapped layers are currently not supported
	[ +13.392744] overlayfs: idmapped layers are currently not supported
	[Nov21 15:01] overlayfs: idmapped layers are currently not supported
	[Nov21 15:02] overlayfs: idmapped layers are currently not supported
	[ +25.555443] overlayfs: idmapped layers are currently not supported
	[Nov21 15:03] overlayfs: idmapped layers are currently not supported
	[  +2.173955] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8812c413c9de68d93c0162764f45b3d55f29007bce2646ce2fb79c02a7766a43] <==
	{"level":"warn","ts":"2025-11-21T15:03:38.985019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.030822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.079352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.110732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.177012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.199999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.241767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.271366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.301013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.346325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.399355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.411022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.453724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.479399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.541462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.563066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.588653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.614341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.651222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.683263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.726519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.755989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.796464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.817925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T15:03:39.996169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36200","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:04:45 up  2:47,  0 user,  load average: 3.09, 3.81, 3.06
	Linux default-k8s-diff-port-124330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8542c4d9705bcfbb9ccfc9cee884439ff94461901253f047e84e29acf9b7621e] <==
	I1121 15:03:44.143605       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 15:03:44.155208       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 15:03:44.155361       1 main.go:148] setting mtu 1500 for CNI 
	I1121 15:03:44.155374       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 15:03:44.155385       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T15:03:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 15:03:44.430487       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 15:03:44.430505       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 15:03:44.430513       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 15:03:44.430826       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 15:04:14.430400       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 15:04:14.430514       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 15:04:14.431301       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 15:04:14.431344       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1121 15:04:15.930936       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 15:04:15.930966       1 metrics.go:72] Registering metrics
	I1121 15:04:15.931039       1 controller.go:711] "Syncing nftables rules"
	I1121 15:04:24.430530       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:04:24.430576       1 main.go:301] handling current node
	I1121 15:04:34.429722       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:04:34.429757       1 main.go:301] handling current node
	I1121 15:04:44.439461       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 15:04:44.439500       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f3450da3d6505714a2ddbd0849055e0c303889ab8fbf96ab66e5fb100167b3d0] <==
	I1121 15:03:41.859254       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 15:03:41.859260       1 cache.go:39] Caches are synced for autoregister controller
	I1121 15:03:41.867866       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 15:03:41.901036       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1121 15:03:41.982548       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1121 15:03:41.982575       1 policy_source.go:240] refreshing policies
	I1121 15:03:41.986521       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1121 15:03:42.017368       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 15:03:42.017409       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 15:03:42.040177       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 15:03:42.060993       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 15:03:42.062343       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 15:03:42.133774       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1121 15:03:42.217608       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 15:03:42.435903       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 15:03:43.241148       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 15:03:44.571453       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 15:03:44.919784       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 15:03:45.249863       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 15:03:45.421537       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 15:03:45.784575       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.91.220"}
	I1121 15:03:45.833167       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.134.110"}
	I1121 15:03:46.890213       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 15:03:47.222934       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 15:03:47.382390       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ee9ac53aba59fc1e496aea56983c3d0c392cff161ea0a9c80336aaf6a3bb18d1] <==
	I1121 15:03:46.846229       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 15:03:46.846305       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-124330"
	I1121 15:03:46.846360       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 15:03:46.851549       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 15:03:46.851587       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 15:03:46.853388       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 15:03:46.855617       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 15:03:46.857844       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 15:03:46.858792       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 15:03:46.858841       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 15:03:46.861994       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 15:03:46.880964       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 15:03:46.884509       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 15:03:46.884812       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 15:03:46.886940       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 15:03:46.888001       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 15:03:46.891738       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 15:03:46.893185       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 15:03:46.897349       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 15:03:46.899047       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 15:03:46.899124       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 15:03:46.915365       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:03:46.953803       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 15:03:46.953837       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 15:03:46.953846       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c883b96946c233f38d0c8644abe895f3d621a5ad142233e77120f6b5eda51757] <==
	I1121 15:03:45.742562       1 server_linux.go:53] "Using iptables proxy"
	I1121 15:03:46.194295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 15:03:46.298243       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 15:03:46.307959       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 15:03:46.308048       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 15:03:46.655652       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 15:03:46.655776       1 server_linux.go:132] "Using iptables Proxier"
	I1121 15:03:46.929603       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 15:03:46.930026       1 server.go:527] "Version info" version="v1.34.1"
	I1121 15:03:46.930254       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:03:46.932169       1 config.go:200] "Starting service config controller"
	I1121 15:03:46.932246       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 15:03:46.932316       1 config.go:106] "Starting endpoint slice config controller"
	I1121 15:03:46.932344       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 15:03:46.932450       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 15:03:46.932483       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 15:03:46.933426       1 config.go:309] "Starting node config controller"
	I1121 15:03:46.933510       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 15:03:46.933564       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 15:03:47.033178       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 15:03:47.033221       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 15:03:47.033266       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c627117ae55976c3bd9490f6441736eebc7000d2c50a16ac0fbd1824c9604beb] <==
	I1121 15:03:40.640093       1 serving.go:386] Generated self-signed cert in-memory
	I1121 15:03:47.219599       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 15:03:47.219719       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 15:03:47.250148       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 15:03:47.250193       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 15:03:47.250350       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:03:47.250361       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 15:03:47.250377       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:03:47.250385       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:03:47.251355       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 15:03:47.251775       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 15:03:47.351845       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 15:03:47.351985       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 15:03:47.352144       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 15:03:47 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:47.533009     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj5vd\" (UniqueName: \"kubernetes.io/projected/e8eeec4c-0209-4d3a-bf07-5706e2abe27e-kube-api-access-zj5vd\") pod \"kubernetes-dashboard-855c9754f9-8j9t6\" (UID: \"e8eeec4c-0209-4d3a-bf07-5706e2abe27e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8j9t6"
	Nov 21 15:03:47 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:47.634023     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1f883e74-021c-4514-a1b6-0497912dadd7-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kzfq7\" (UID: \"1f883e74-021c-4514-a1b6-0497912dadd7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7"
	Nov 21 15:03:47 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:47.634096     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7gfh\" (UniqueName: \"kubernetes.io/projected/1f883e74-021c-4514-a1b6-0497912dadd7-kube-api-access-c7gfh\") pod \"dashboard-metrics-scraper-6ffb444bf9-kzfq7\" (UID: \"1f883e74-021c-4514-a1b6-0497912dadd7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7"
	Nov 21 15:03:47 default-k8s-diff-port-124330 kubelet[788]: W1121 15:03:47.850330     788 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fad72cd6bedb2b8b5b06cb1ac1d1e7cb44e3c9cb76f921aa132eb39f3c774818/crio-e0814e61a4311ff91969e04c10882f6cd782d563b8eca26fd1fe53f84644670a WatchSource:0}: Error finding container e0814e61a4311ff91969e04c10882f6cd782d563b8eca26fd1fe53f84644670a: Status 404 returned error can't find the container with id e0814e61a4311ff91969e04c10882f6cd782d563b8eca26fd1fe53f84644670a
	Nov 21 15:03:54 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:54.206742     788 scope.go:117] "RemoveContainer" containerID="79bb927e953c3dbd1c84fc6ed8d6dc4287bbff50ca50979787f1c3248354764e"
	Nov 21 15:03:55 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:55.211598     788 scope.go:117] "RemoveContainer" containerID="79bb927e953c3dbd1c84fc6ed8d6dc4287bbff50ca50979787f1c3248354764e"
	Nov 21 15:03:55 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:55.211905     788 scope.go:117] "RemoveContainer" containerID="20de9199729559ad0d9aba4b1ddb1571b04e053caa4e879499c52f0aee9e9d4e"
	Nov 21 15:03:55 default-k8s-diff-port-124330 kubelet[788]: E1121 15:03:55.212058     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:03:56 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:56.216118     788 scope.go:117] "RemoveContainer" containerID="20de9199729559ad0d9aba4b1ddb1571b04e053caa4e879499c52f0aee9e9d4e"
	Nov 21 15:03:56 default-k8s-diff-port-124330 kubelet[788]: E1121 15:03:56.216268     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:03:57 default-k8s-diff-port-124330 kubelet[788]: I1121 15:03:57.754141     788 scope.go:117] "RemoveContainer" containerID="20de9199729559ad0d9aba4b1ddb1571b04e053caa4e879499c52f0aee9e9d4e"
	Nov 21 15:03:57 default-k8s-diff-port-124330 kubelet[788]: E1121 15:03:57.754327     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:04:10 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:10.733516     788 scope.go:117] "RemoveContainer" containerID="20de9199729559ad0d9aba4b1ddb1571b04e053caa4e879499c52f0aee9e9d4e"
	Nov 21 15:04:11 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:11.257356     788 scope.go:117] "RemoveContainer" containerID="20de9199729559ad0d9aba4b1ddb1571b04e053caa4e879499c52f0aee9e9d4e"
	Nov 21 15:04:11 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:11.257651     788 scope.go:117] "RemoveContainer" containerID="0df3c2682375bee0e258205c092887bac1830cecbcd0d95b1924532bfaa5484f"
	Nov 21 15:04:11 default-k8s-diff-port-124330 kubelet[788]: E1121 15:04:11.257806     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:04:11 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:11.289547     788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8j9t6" podStartSLOduration=13.048951645 podStartE2EDuration="24.289529901s" podCreationTimestamp="2025-11-21 15:03:47 +0000 UTC" firstStartedPulling="2025-11-21 15:03:47.871784412 +0000 UTC m=+14.464648596" lastFinishedPulling="2025-11-21 15:03:59.112362676 +0000 UTC m=+25.705226852" observedRunningTime="2025-11-21 15:03:59.243188425 +0000 UTC m=+25.836052609" watchObservedRunningTime="2025-11-21 15:04:11.289529901 +0000 UTC m=+37.882394076"
	Nov 21 15:04:15 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:15.271384     788 scope.go:117] "RemoveContainer" containerID="5f0e5e46cc63025dfbd9f042185466d845b7c03f4a63e54afd5bb50b59c9f815"
	Nov 21 15:04:17 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:17.753914     788 scope.go:117] "RemoveContainer" containerID="0df3c2682375bee0e258205c092887bac1830cecbcd0d95b1924532bfaa5484f"
	Nov 21 15:04:17 default-k8s-diff-port-124330 kubelet[788]: E1121 15:04:17.754618     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:04:29 default-k8s-diff-port-124330 kubelet[788]: I1121 15:04:29.735080     788 scope.go:117] "RemoveContainer" containerID="0df3c2682375bee0e258205c092887bac1830cecbcd0d95b1924532bfaa5484f"
	Nov 21 15:04:29 default-k8s-diff-port-124330 kubelet[788]: E1121 15:04:29.735412     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kzfq7_kubernetes-dashboard(1f883e74-021c-4514-a1b6-0497912dadd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kzfq7" podUID="1f883e74-021c-4514-a1b6-0497912dadd7"
	Nov 21 15:04:40 default-k8s-diff-port-124330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 15:04:40 default-k8s-diff-port-124330 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 15:04:40 default-k8s-diff-port-124330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0328bfae2ad66c3fc1fcf7d24675c343d9b2c56f62faf1eb3ba8350ce1788d93] <==
	2025/11/21 15:03:59 Starting overwatch
	2025/11/21 15:03:59 Using namespace: kubernetes-dashboard
	2025/11/21 15:03:59 Using in-cluster config to connect to apiserver
	2025/11/21 15:03:59 Using secret token for csrf signing
	2025/11/21 15:03:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 15:03:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 15:03:59 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 15:03:59 Generating JWE encryption key
	2025/11/21 15:03:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 15:03:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 15:04:00 Initializing JWE encryption key from synchronized object
	2025/11/21 15:04:00 Creating in-cluster Sidecar client
	2025/11/21 15:04:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 15:04:00 Serving insecurely on HTTP port: 9090
	2025/11/21 15:04:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5f0e5e46cc63025dfbd9f042185466d845b7c03f4a63e54afd5bb50b59c9f815] <==
	I1121 15:03:45.119707       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 15:04:15.143485       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7b937ecab0a241d292f6754bcbd211657f52de9ee8071744759c18c71945d0db] <==
	W1121 15:04:15.400553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:18.855283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:23.115378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:26.713567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:29.775411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:32.797442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:32.802170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:04:32.802331       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 15:04:32.802500       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-124330_815719b2-3051-407b-8bdf-de3c0eb9d913!
	I1121 15:04:32.803592       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf0e76b3-7a61-453e-ad8b-291e224c4abe", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-124330_815719b2-3051-407b-8bdf-de3c0eb9d913 became leader
	W1121 15:04:32.808493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:32.814127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 15:04:32.903059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-124330_815719b2-3051-407b-8bdf-de3c0eb9d913!
	W1121 15:04:34.816861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:34.823625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:36.827328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:36.832456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:38.839049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:38.851548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:40.855247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:40.863365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:42.866589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:42.871754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:44.874735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 15:04:44.881941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
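
The storage-provisioner log above is dominated by client-go warnings that v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. As a minimal sketch of the recommended replacement call (assuming a reachable cluster via the default kubeconfig; the file name and namespace are illustrative, not part of the test suite):

// endpointslice_list.go - sketch of the discovery.k8s.io/v1 EndpointSlice API
// that the warnings above recommend over the deprecated v1 Endpoints.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig lookup rules.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, &clientcmd.ConfigOverrides{}).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List EndpointSlices in kube-system instead of the deprecated Endpoints.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}
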
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330: exit status 2 (379.165013ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
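The status check above projects a single field through the Go text/template passed to --format={{.APIServer}}. A minimal standalone sketch of that mechanism follows; the Status struct here is a hypothetical stand-in, not minikube's actual type:

// status_template.go - sketch of projecting one field of a status struct
// through a Go text/template, as the --format flag above does.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints: Running - matching the -- stdout -- block above.
}
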
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-124330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.10s)
E1121 15:10:28.674529  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
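
The post-mortem above ends by querying for pods that are not Running via a kubectl field selector. The same query expressed against client-go, as a minimal sketch (assuming the default kubeconfig; names are illustrative):

// nonrunning_pods.go - sketch of listing pods across all namespaces whose
// status.phase is not Running, mirroring the kubectl invocation above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The empty namespace argument means "all namespaces", mirroring -A;
	// the field selector mirrors --field-selector=status.phase!=Running.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
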

                                                
                                    

Test pass (261/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 15.4
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.15
9 TestDownloadOnly/v1.28.0/DeleteAll 0.27
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.25
12 TestDownloadOnly/v1.34.1/json-events 6.2
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 158.12
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 9.81
48 TestAddons/StoppedEnableDisable 12.42
49 TestCertOptions 37.55
50 TestCertExpiration 243.05
52 TestForceSystemdFlag 41.17
53 TestForceSystemdEnv 48.28
58 TestErrorSpam/setup 33.6
59 TestErrorSpam/start 0.77
60 TestErrorSpam/status 1.51
61 TestErrorSpam/pause 6.24
62 TestErrorSpam/unpause 5.48
63 TestErrorSpam/stop 1.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 80.65
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 37.84
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.47
75 TestFunctional/serial/CacheCmd/cache/add_local 1.11
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
80 TestFunctional/serial/CacheCmd/cache/delete 0.17
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 37.5
84 TestFunctional/serial/ComponentHealth 0.12
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.49
87 TestFunctional/serial/InvalidService 4.38
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 13.15
91 TestFunctional/parallel/DryRun 0.62
92 TestFunctional/parallel/InternationalLanguage 0.29
93 TestFunctional/parallel/StatusCmd 1.42
98 TestFunctional/parallel/AddonsCmd 0.23
99 TestFunctional/parallel/PersistentVolumeClaim 25.96
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 2.42
104 TestFunctional/parallel/FileSync 0.3
105 TestFunctional/parallel/CertSync 2.11
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
113 TestFunctional/parallel/License 0.33
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.49
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
129 TestFunctional/parallel/MountCmd/any-port 7.79
130 TestFunctional/parallel/MountCmd/specific-port 2.07
131 TestFunctional/parallel/MountCmd/VerifyCleanup 2.11
132 TestFunctional/parallel/ServiceCmd/List 0.58
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.06
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.89
144 TestFunctional/parallel/ImageCommands/Setup 0.67
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
155 TestFunctional/delete_echo-server_images 0.29
156 TestFunctional/delete_my-image_image 0.05
157 TestFunctional/delete_minikube_cached_images 0.04
162 TestMultiControlPlane/serial/StartCluster 203.31
163 TestMultiControlPlane/serial/DeployApp 6.68
164 TestMultiControlPlane/serial/PingHostFromPods 1.59
165 TestMultiControlPlane/serial/AddWorkerNode 61.71
166 TestMultiControlPlane/serial/NodeLabels 0.13
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.06
168 TestMultiControlPlane/serial/CopyFile 20.02
169 TestMultiControlPlane/serial/StopSecondaryNode 12.9
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
171 TestMultiControlPlane/serial/RestartSecondaryNode 26.3
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.26
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 121.04
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.55
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
176 TestMultiControlPlane/serial/StopCluster 36.25
177 TestMultiControlPlane/serial/RestartCluster 96.05
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.82
179 TestMultiControlPlane/serial/AddSecondaryNode 83.02
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
185 TestJSONOutput/start/Command 79.78
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.84
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 42.65
211 TestKicCustomNetwork/use_default_bridge_network 36.11
212 TestKicExistingNetwork 36.22
213 TestKicCustomSubnet 35.9
214 TestKicStaticIP 38.18
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 73.28
219 TestMountStart/serial/StartWithMountFirst 8.84
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.33
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.34
227 TestMountStart/serial/VerifyMountPostStop 0.29
230 TestMultiNode/serial/FreshStart2Nodes 136.79
231 TestMultiNode/serial/DeployApp2Nodes 4.81
232 TestMultiNode/serial/PingHostFrom2Pods 1
233 TestMultiNode/serial/AddNode 60.28
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.4
237 TestMultiNode/serial/StopNode 2.41
238 TestMultiNode/serial/StartAfterStop 8.64
239 TestMultiNode/serial/RestartKeepsNodes 76.1
240 TestMultiNode/serial/DeleteNode 5.76
241 TestMultiNode/serial/StopMultiNode 24.02
242 TestMultiNode/serial/RestartMultiNode 48.68
243 TestMultiNode/serial/ValidateNameConflict 36.77
248 TestPreload 184.1
250 TestScheduledStopUnix 110.75
253 TestInsufficientStorage 10.78
254 TestRunningBinaryUpgrade 69.54
256 TestKubernetesUpgrade 357.02
257 TestMissingContainerUpgrade 121.42
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 46.15
261 TestNoKubernetes/serial/StartWithStopK8s 8.02
262 TestNoKubernetes/serial/Start 10.02
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
265 TestNoKubernetes/serial/ProfileList 1.21
266 TestNoKubernetes/serial/Stop 1.37
267 TestNoKubernetes/serial/StartNoArgs 7.73
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
269 TestStoppedBinaryUpgrade/Setup 2.16
270 TestStoppedBinaryUpgrade/Upgrade 62.59
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
280 TestPause/serial/Start 82.19
281 TestPause/serial/SecondStartNoReconfiguration 31.91
290 TestNetworkPlugins/group/false 5.26
295 TestStartStop/group/old-k8s-version/serial/FirstStart 62.84
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.47
298 TestStartStop/group/old-k8s-version/serial/Stop 12.01
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/old-k8s-version/serial/SecondStart 51.77
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.25
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.53
306 TestStartStop/group/embed-certs/serial/FirstStart 88.11
308 TestStartStop/group/no-preload/serial/FirstStart 74.13
309 TestStartStop/group/no-preload/serial/DeployApp 9.83
311 TestStartStop/group/no-preload/serial/Stop 12.15
312 TestStartStop/group/embed-certs/serial/DeployApp 9.41
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.3
315 TestStartStop/group/embed-certs/serial/Stop 12.2
316 TestStartStop/group/no-preload/serial/SecondStart 55.55
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
318 TestStartStop/group/embed-certs/serial/SecondStart 59.9
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.15
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
330 TestStartStop/group/newest-cni/serial/FirstStart 44.55
331 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/Stop 1.37
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
335 TestStartStop/group/newest-cni/serial/SecondStart 16.32
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.38
341 TestNetworkPlugins/group/auto/Start 88.9
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.16
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 63.66
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
348 TestNetworkPlugins/group/auto/KubeletFlags 0.3
349 TestNetworkPlugins/group/auto/NetCatPod 11.3
350 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
352 TestNetworkPlugins/group/auto/DNS 0.23
353 TestNetworkPlugins/group/auto/Localhost 0.21
354 TestNetworkPlugins/group/auto/HairPin 0.18
355 TestNetworkPlugins/group/kindnet/Start 87.98
356 TestNetworkPlugins/group/calico/Start 67.12
357 TestNetworkPlugins/group/kindnet/ControllerPod 6
358 TestNetworkPlugins/group/calico/ControllerPod 6.01
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
360 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
361 TestNetworkPlugins/group/calico/KubeletFlags 0.4
362 TestNetworkPlugins/group/calico/NetCatPod 11.4
363 TestNetworkPlugins/group/kindnet/DNS 0.17
364 TestNetworkPlugins/group/kindnet/Localhost 0.13
365 TestNetworkPlugins/group/kindnet/HairPin 0.15
366 TestNetworkPlugins/group/calico/DNS 0.17
367 TestNetworkPlugins/group/calico/Localhost 0.13
368 TestNetworkPlugins/group/calico/HairPin 0.14
369 TestNetworkPlugins/group/custom-flannel/Start 65.96
370 TestNetworkPlugins/group/enable-default-cni/Start 83.81
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.29
373 TestNetworkPlugins/group/custom-flannel/DNS 0.15
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.37
378 TestNetworkPlugins/group/flannel/Start 67.94
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
382 TestNetworkPlugins/group/bridge/Start 75.23
383 TestNetworkPlugins/group/flannel/ControllerPod 6
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
385 TestNetworkPlugins/group/flannel/NetCatPod 10.28
386 TestNetworkPlugins/group/flannel/DNS 0.16
387 TestNetworkPlugins/group/flannel/Localhost 0.14
388 TestNetworkPlugins/group/flannel/HairPin 0.14
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
390 TestNetworkPlugins/group/bridge/NetCatPod 11.41
391 TestNetworkPlugins/group/bridge/DNS 0.19
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.28.0/json-events (15.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-437513 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-437513 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.404084635s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (15.40s)
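
The json-events test above exercises the line-delimited JSON events that minikube start -o=json emits. A minimal sketch of consuming that stream follows; the event schema here (type, data.currentstep, data.totalsteps, data.name) is an assumption for illustration, and any line that does not decode is skipped:

// json_events.go - sketch of reading minikube's -o=json event stream.
// Usage (hypothetical): minikube start -o=json ... | go run json_events.go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long log lines
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON noise
		}
		fmt.Printf("%s: step %s/%s %q\n", ev.Type, ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
	}
}
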

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1121 13:56:17.961303  291060 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1121 13:56:17.961389  291060 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
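
The preload-exists step above only verifies that the cached preload tarball is on disk. A minimal sketch of that kind of existence check, with the tarball name taken from the log line above (adjust the directory for your own MINIKUBE_HOME):

// preload_exists.go - sketch of checking whether the cached preload tarball
// is present, as the preload-exists step above does.
package main

import (
	"fmt"
	"os"
)

func main() {
	tarball := os.ExpandEnv("${HOME}/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
	if fi, err := os.Stat(tarball); err == nil {
		fmt.Printf("found local preload: %s (%d bytes)\n", tarball, fi.Size())
	} else if os.IsNotExist(err) {
		fmt.Println("preload not cached; minikube would download it")
	} else {
		panic(err)
	}
}
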

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-437513
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-437513: exit status 85 (153.906356ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-437513 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-437513 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:56:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 13:56:02.604267  291065 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:56:02.604494  291065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:02.604524  291065 out.go:374] Setting ErrFile to fd 2...
	I1121 13:56:02.604543  291065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:02.604859  291065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	W1121 13:56:02.605024  291065 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21847-289204/.minikube/config/config.json: open /home/jenkins/minikube-integration/21847-289204/.minikube/config/config.json: no such file or directory
	I1121 13:56:02.605481  291065 out.go:368] Setting JSON to true
	I1121 13:56:02.606395  291065 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5914,"bootTime":1763727448,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 13:56:02.606503  291065 start.go:143] virtualization:  
	I1121 13:56:02.610505  291065 out.go:99] [download-only-437513] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1121 13:56:02.610728  291065 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball: no such file or directory
	I1121 13:56:02.610822  291065 notify.go:221] Checking for updates...
	I1121 13:56:02.613745  291065 out.go:171] MINIKUBE_LOCATION=21847
	I1121 13:56:02.616882  291065 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:56:02.619817  291065 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 13:56:02.622813  291065 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 13:56:02.625752  291065 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1121 13:56:02.631634  291065 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 13:56:02.631982  291065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:56:02.665349  291065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 13:56:02.665468  291065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:02.721975  291065 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-21 13:56:02.712683279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:02.722087  291065 docker.go:319] overlay module found
	I1121 13:56:02.725245  291065 out.go:99] Using the docker driver based on user configuration
	I1121 13:56:02.725298  291065 start.go:309] selected driver: docker
	I1121 13:56:02.725306  291065 start.go:930] validating driver "docker" against <nil>
	I1121 13:56:02.725430  291065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:02.790917  291065 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-21 13:56:02.781613376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:02.791072  291065 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:56:02.791345  291065 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1121 13:56:02.791512  291065 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 13:56:02.794697  291065 out.go:171] Using Docker driver with root privileges
	I1121 13:56:02.797666  291065 cni.go:84] Creating CNI manager for ""
	I1121 13:56:02.797747  291065 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:56:02.797765  291065 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 13:56:02.797855  291065 start.go:353] cluster config:
	{Name:download-only-437513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-437513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:56:02.800844  291065 out.go:99] Starting "download-only-437513" primary control-plane node in "download-only-437513" cluster
	I1121 13:56:02.800875  291065 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 13:56:02.803786  291065 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1121 13:56:02.803845  291065 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 13:56:02.803944  291065 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 13:56:02.820404  291065 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:56:02.820593  291065 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1121 13:56:02.820699  291065 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:56:02.867115  291065 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1121 13:56:02.867139  291065 cache.go:65] Caching tarball of preloaded images
	I1121 13:56:02.867310  291065 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 13:56:02.870550  291065 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1121 13:56:02.870583  291065 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1121 13:56:02.964712  291065 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1121 13:56:02.964842  291065 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1121 13:56:08.439312  291065 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1121 13:56:08.439692  291065 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/download-only-437513/config.json ...
	I1121 13:56:08.439735  291065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/download-only-437513/config.json: {Name:mk4056bcef9481d98d792f4d2479db8aa360e730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:08.439963  291065 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 13:56:08.440132  291065 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-437513 host does not exist
	  To start a cluster, run: "minikube start -p download-only-437513"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.15s)
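
Note: the log above shows minikube's checksum-verified download pattern: the preload's MD5 is fetched from the GCS API (e092595ade89dbfc477bd4cd6b9c633b) and appended to the download URL as ?checksum=md5:..., then checked once the tarball is on disk. A minimal Go sketch of that verify-after-download flow, with made-up names (fetchWithMD5 is not minikube's function):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// fetchWithMD5 downloads url into dest and fails if the body's MD5 does not
// match want (hex-encoded), mirroring the checksum=md5:... pattern in the log.
func fetchWithMD5(url, dest, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash the bytes while writing them out, then compare afterwards.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// usage: fetch <url> <dest> <md5hex>
	if len(os.Args) != 4 {
		log.Fatal("usage: fetch <url> <dest> <md5hex>")
	}
	if err := fetchWithMD5(os.Args[1], os.Args[2], os.Args[3]); err != nil {
		log.Fatal(err)
	}
	fmt.Println("ok")
}

Run with the URL and checksum from the log to re-fetch the v1.28.0 preload (note it is a large tarball).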

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.27s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-437513
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.25s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (6.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-946349 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-946349 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.195754861s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (6.20s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1121 13:56:24.829034  291060 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1121 13:56:24.829068  291060 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-946349
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-946349: exit status 85 (87.61442ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-437513 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-437513 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │
	│ delete  │ -p download-only-437513                                                                                                                                                   │ download-only-437513 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │
	│ start   │ -o=json --download-only -p download-only-946349 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-946349 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:56:18
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 13:56:18.682541  291269 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:56:18.682824  291269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:18.682857  291269 out.go:374] Setting ErrFile to fd 2...
	I1121 13:56:18.682877  291269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:18.683182  291269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 13:56:18.683680  291269 out.go:368] Setting JSON to true
	I1121 13:56:18.684608  291269 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5930,"bootTime":1763727448,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 13:56:18.684713  291269 start.go:143] virtualization:  
	I1121 13:56:18.723448  291269 out.go:99] [download-only-946349] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 13:56:18.723896  291269 notify.go:221] Checking for updates...
	I1121 13:56:18.757325  291269 out.go:171] MINIKUBE_LOCATION=21847
	I1121 13:56:18.787536  291269 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:56:18.818402  291269 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 13:56:18.852189  291269 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 13:56:18.883005  291269 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1121 13:56:18.964314  291269 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 13:56:18.964703  291269 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:56:18.988219  291269 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 13:56:18.988357  291269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:19.043935  291269 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-21 13:56:19.034456865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:19.044040  291269 docker.go:319] overlay module found
	I1121 13:56:19.091731  291269 out.go:99] Using the docker driver based on user configuration
	I1121 13:56:19.091781  291269 start.go:309] selected driver: docker
	I1121 13:56:19.091805  291269 start.go:930] validating driver "docker" against <nil>
	I1121 13:56:19.091917  291269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:19.159095  291269 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-21 13:56:19.149439986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:19.159259  291269 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:56:19.159562  291269 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1121 13:56:19.159748  291269 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 13:56:19.172248  291269 out.go:171] Using Docker driver with root privileges
	I1121 13:56:19.202934  291269 cni.go:84] Creating CNI manager for ""
	I1121 13:56:19.203010  291269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:56:19.203023  291269 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 13:56:19.203106  291269 start.go:353] cluster config:
	{Name:download-only-946349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-946349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:56:19.236402  291269 out.go:99] Starting "download-only-946349" primary control-plane node in "download-only-946349" cluster
	I1121 13:56:19.236435  291269 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 13:56:19.270507  291269 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1121 13:56:19.270567  291269 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:19.270743  291269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 13:56:19.285760  291269 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:56:19.285902  291269 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1121 13:56:19.285922  291269 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1121 13:56:19.285927  291269 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1121 13:56:19.285934  291269 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1121 13:56:19.330107  291269 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 13:56:19.330132  291269 cache.go:65] Caching tarball of preloaded images
	I1121 13:56:19.330310  291269 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:19.364236  291269 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1121 13:56:19.364270  291269 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1121 13:56:19.452826  291269 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1121 13:56:19.452882  291269 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21847-289204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-946349 host does not exist
	  To start a cluster, run: "minikube start -p download-only-946349"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-946349
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1121 13:56:26.014790  291060 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-355307 --alsologtostderr --binary-mirror http://127.0.0.1:37741 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-355307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-355307
--- PASS: TestBinaryMirror (0.59s)
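
Note: TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:37741, so the kubectl fetch that would otherwise hit https://dl.k8s.io (the binary.go line above) is pointed at a local server instead. A rough sketch of that kind of host substitution, assuming the mirror preserves the /release/... path layout (mirrorURL is an illustrative helper, not minikube's code):

package main

import (
	"fmt"
	"log"
	"net/url"
)

// mirrorURL swaps the scheme and host of a release URL for those of a mirror,
// keeping the release path intact.
func mirrorURL(orig, mirror string) (string, error) {
	u, err := url.Parse(orig)
	if err != nil {
		return "", err
	}
	m, err := url.Parse(mirror)
	if err != nil {
		return "", err
	}
	u.Scheme, u.Host = m.Scheme, m.Host
	return u.String(), nil
}

func main() {
	out, err := mirrorURL(
		"https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl",
		"http://127.0.0.1:37741")
	if err != nil {
		log.Fatal(err)
	}
	// Prints: http://127.0.0.1:37741/release/v1.34.1/bin/linux/arm64/kubectl
	fmt.Println(out)
}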

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-494116
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-494116: exit status 85 (69.99837ms)

                                                
                                                
-- stdout --
	* Profile "addons-494116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-494116"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-494116
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-494116: exit status 85 (79.943998ms)

                                                
                                                
-- stdout --
	* Profile "addons-494116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-494116"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (158.12s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-494116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-494116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m38.116642979s)
--- PASS: TestAddons/Setup (158.12s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-494116 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-494116 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.81s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-494116 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-494116 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3076dae2-a593-4499-8efe-3f9806b2d96d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3076dae2-a593-4499-8efe-3f9806b2d96d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003371102s
addons_test.go:694: (dbg) Run:  kubectl --context addons-494116 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-494116 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-494116 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-494116 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.81s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.42s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-494116
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-494116: (12.134682452s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-494116
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-494116
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-494116
--- PASS: TestAddons/StoppedEnableDisable (12.42s)

                                                
                                    
TestCertOptions (37.55s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-605096 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-605096 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.653641918s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-605096 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-605096 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-605096 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-605096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-605096
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-605096: (2.125191414s)
--- PASS: TestCertOptions (37.55s)
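
Note: the openssl x509 call above is what confirms that the extra --apiserver-ips and --apiserver-names landed in apiserver.crt's subject alternative names. The same inspection in Go, as a sketch (the path argument and comments about expected values are illustrative); NotAfter is also the field the next test, TestCertExpiration, moves with --cert-expiration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// Usage: saninfo /var/lib/minikube/certs/apiserver.crt
func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)     // would include localhost, www.google.com
	fmt.Println("IP SANs: ", cert.IPAddresses)  // would include 127.0.0.1, 192.168.15.15
	fmt.Println("expires: ", cert.NotAfter)     // the field --cert-expiration controls
}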

                                                
                                    
TestCertExpiration (243.05s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-304879 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-304879 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.314214184s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-304879 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-304879 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.080710033s)
helpers_test.go:175: Cleaning up "cert-expiration-304879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-304879
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-304879: (2.652347841s)
--- PASS: TestCertExpiration (243.05s)

                                                
                                    
TestForceSystemdFlag (41.17s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-332060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1121 14:53:48.673116  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-332060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.251296573s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-332060 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-332060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-332060
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-332060: (2.612765019s)
--- PASS: TestForceSystemdFlag (41.17s)

                                                
                                    
TestForceSystemdEnv (48.28s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-360486 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-360486 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.441740428s)
helpers_test.go:175: Cleaning up "force-systemd-env-360486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-360486
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-360486: (2.837245216s)
--- PASS: TestForceSystemdEnv (48.28s)

                                                
                                    
TestErrorSpam/setup (33.6s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-181641 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-181641 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-181641 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-181641 --driver=docker  --container-runtime=crio: (33.597010454s)
--- PASS: TestErrorSpam/setup (33.60s)

                                                
                                    
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 status
--- PASS: TestErrorSpam/status (1.51s)

                                                
                                    
TestErrorSpam/pause (6.24s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 pause: exit status 80 (2.340742998s)

                                                
                                                
-- stdout --
	* Pausing node nospam-181641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:03:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 pause: exit status 80 (2.336221465s)

                                                
                                                
-- stdout --
	* Pausing node nospam-181641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:03:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 pause: exit status 80 (1.563217272s)

                                                
                                                
-- stdout --
	* Pausing node nospam-181641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:03:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.24s)
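
Note: TestErrorSpam/pause passes even though every pause attempt exits 80, since the test apparently asserts that the failure output stays free of unexpected spam rather than that pausing succeeds. The underlying GUEST_PAUSE error is still informative: pause first lists running containers with sudo runc list -f json, and that command fails when /run/runc (runc's default state directory for root) is absent, presumably because the runtime on this crio node keeps its state elsewhere. A small Go sketch of the same probe (the error handling here is illustrative, not minikube's):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// The exact command the failures above report: sudo runc list -f json
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			log.Fatalf("runc state dir missing, nothing to pause: %s", out)
		}
		log.Fatalf("runc list failed: %v\n%s", err, out)
	}
	fmt.Printf("runc containers: %s\n", out)
}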

                                                
                                    
TestErrorSpam/unpause (5.48s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 unpause: exit status 80 (1.544575893s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-181641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:03:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 unpause: exit status 80 (2.005715493s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-181641 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:03:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 unpause: exit status 80 (1.927754694s)
-- stdout --
	* Unpausing node nospam-181641 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:03:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.48s)
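Note on the exit-80 failures above: pause and unpause both die on the same underlying call, `sudo runc list -f json`, which fails because `/run/runc` does not exist inside the node. A minimal sketch for narrowing this down by hand, reusing the profile from this run; the `/run/crun` path is an assumption about where crio's low-level runtime might keep its state, not something taken from this log:

	out/minikube-linux-arm64 -p nospam-181641 ssh "sudo runc list -f json"      # the exact call that fails
	out/minikube-linux-arm64 -p nospam-181641 ssh "sudo crictl ps"             # containers are still visible through crio
	out/minikube-linux-arm64 -p nospam-181641 ssh "ls -d /run/runc /run/crun"  # check which runtime state dir actually exists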

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 stop: (1.31062291s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181641 --log_dir /tmp/nospam-181641 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21847-289204/.minikube/files/etc/test/nested/copy/291060/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-939098 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1121 14:04:05.599510  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:05.610652  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:05.622248  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:05.644004  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:05.685911  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:05.768041  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:05.929590  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:06.251247  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:06.893283  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:08.174809  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:10.737749  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:15.859935  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:26.101390  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-939098 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.645937641s)
--- PASS: TestFunctional/serial/StartWithProxy (80.65s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.84s)

=== RUN   TestFunctional/serial/SoftStart
I1121 14:04:39.985261  291060 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-939098 --alsologtostderr -v=8
E1121 14:04:46.582725  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-939098 --alsologtostderr -v=8: (37.842754023s)
functional_test.go:678: soft start took 37.843239694s for "functional-939098" cluster.
I1121 14:05:17.828311  291060 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (37.84s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-939098 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-939098 cache add registry.k8s.io/pause:3.1: (1.171724485s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-939098 cache add registry.k8s.io/pause:3.3: (1.180655772s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-939098 cache add registry.k8s.io/pause:latest: (1.118950204s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-939098 /tmp/TestFunctionalserialCacheCmdcacheadd_local3114561733/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 cache add minikube-local-cache-test:functional-939098
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 cache delete minikube-local-cache-test:functional-939098
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-939098
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.442254ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
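The cache_reload sequence above doubles as a recipe: delete an image inside the node, confirm it is gone, then repopulate it from minikube's local cache. The same flow by hand, with the profile and image from this run (the exit-1 `inspecti` in the middle is the expected "image gone" state, not an error):

	out/minikube-linux-arm64 -p functional-939098 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-939098 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails while the image is absent
	out/minikube-linux-arm64 -p functional-939098 cache reload
	out/minikube-linux-arm64 -p functional-939098 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload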

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 kubectl -- --context functional-939098 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-939098 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (37.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-939098 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1121 14:05:27.544120  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-939098 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.49961372s)
functional_test.go:776: restart took 37.499713357s for "functional-939098" cluster.
I1121 14:06:02.892473  291060 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (37.50s)

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-939098 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-939098 logs: (1.463309815s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 logs --file /tmp/TestFunctionalserialLogsFileCmd1227188863/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-939098 logs --file /tmp/TestFunctionalserialLogsFileCmd1227188863/001/logs.txt: (1.485459832s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.38s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-939098 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-939098
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-939098: exit status 115 (375.058262ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31839 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-939098 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)
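The exit-115 here is the intended behavior: `minikube service` resolved a NodePort URL for invalid-svc but refused to hand it out because no running pod backs the service. One way to confirm that from the same kubectl context (standard kubectl, not shown in the log):

	kubectl --context functional-939098 get endpoints invalid-svc   # empty ENDPOINTS column: nothing backs the service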

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 config get cpus: exit status 14 (82.06123ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 config get cpus: exit status 14 (92.151211ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
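The two exit-14 results are the error path the test exercises: `config get` on a key that was never set (or was just unset) fails with "specified key could not be found in config". The full round trip, by hand:

	out/minikube-linux-arm64 -p functional-939098 config set cpus 2
	out/minikube-linux-arm64 -p functional-939098 config get cpus     # prints 2
	out/minikube-linux-arm64 -p functional-939098 config unset cpus
	out/minikube-linux-arm64 -p functional-939098 config get cpus     # exit 14: key not found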

TestFunctional/parallel/DashboardCmd (13.15s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-939098 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-939098 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 317581: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.15s)

TestFunctional/parallel/DryRun (0.62s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-939098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-939098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (215.688888ms)
-- stdout --
	* [functional-939098] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1121 14:16:40.730830  317013 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:16:40.730967  317013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:16:40.730979  317013 out.go:374] Setting ErrFile to fd 2...
	I1121 14:16:40.730984  317013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:16:40.731250  317013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:16:40.731665  317013 out.go:368] Setting JSON to false
	I1121 14:16:40.732694  317013 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7153,"bootTime":1763727448,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:16:40.732766  317013 start.go:143] virtualization:  
	I1121 14:16:40.735888  317013 out.go:179] * [functional-939098] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:16:40.739634  317013 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:16:40.739768  317013 notify.go:221] Checking for updates...
	I1121 14:16:40.745535  317013 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:16:40.748609  317013 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:16:40.751490  317013 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:16:40.754382  317013 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:16:40.757247  317013 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:16:40.760710  317013 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:16:40.761357  317013 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:16:40.790020  317013 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:16:40.790179  317013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:16:40.868150  317013 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:16:40.846700788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:16:40.868265  317013 docker.go:319] overlay module found
	I1121 14:16:40.871471  317013 out.go:179] * Using the docker driver based on existing profile
	I1121 14:16:40.874367  317013 start.go:309] selected driver: docker
	I1121 14:16:40.874385  317013 start.go:930] validating driver "docker" against &{Name:functional-939098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-939098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:16:40.874584  317013 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:16:40.877970  317013 out.go:203] 
	W1121 14:16:40.880923  317013 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1121 14:16:40.883884  317013 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-939098 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.62s)
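Exit 23 is the validation working: `--dry-run` runs the whole preflight against the existing profile without creating anything, and the requested 250MiB trips the 1800MB usable-memory floor. The same flag is a cheap way to sanity-check a config change before committing to it (the memory value below is illustrative, not from the log):

	out/minikube-linux-arm64 start -p functional-939098 --dry-run --memory 2048 --driver=docker --container-runtime=crio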

TestFunctional/parallel/InternationalLanguage (0.29s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-939098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-939098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (290.02964ms)
-- stdout --
	* [functional-939098] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1121 14:16:40.473225  316934 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:16:40.473438  316934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:16:40.473445  316934 out.go:374] Setting ErrFile to fd 2...
	I1121 14:16:40.473449  316934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:16:40.473812  316934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:16:40.474187  316934 out.go:368] Setting JSON to false
	I1121 14:16:40.475069  316934 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7152,"bootTime":1763727448,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:16:40.475143  316934 start.go:143] virtualization:  
	I1121 14:16:40.478805  316934 out.go:179] * [functional-939098] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1121 14:16:40.482055  316934 notify.go:221] Checking for updates...
	I1121 14:16:40.483387  316934 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:16:40.487723  316934 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:16:40.490655  316934 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:16:40.493571  316934 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:16:40.497385  316934 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:16:40.500531  316934 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:16:40.504356  316934 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:16:40.505569  316934 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:16:40.543336  316934 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:16:40.543659  316934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:16:40.640305  316934 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:16:40.628322851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:16:40.640467  316934 docker.go:319] overlay module found
	I1121 14:16:40.652465  316934 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1121 14:16:40.655976  316934 start.go:309] selected driver: docker
	I1121 14:16:40.655998  316934 start.go:930] validating driver "docker" against &{Name:functional-939098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-939098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:16:40.656611  316934 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:16:40.661263  316934 out.go:203] 
	W1121 14:16:40.665025  316934 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1121 14:16:40.667970  316934 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)
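The French output above is locale-driven: the test runs the same binary with a French locale in its environment and minikube selects the matching message catalog. A sketch of the manual equivalent, assuming the standard locale variables are what the harness sets (which variable and value it uses is an assumption here):

	LC_ALL=fr out/minikube-linux-arm64 start -p functional-939098 --dry-run --memory 250MB --driver=docker --container-runtime=crio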

TestFunctional/parallel/StatusCmd (1.42s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.42s)

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (25.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [dbf5b44f-07ff-4eda-aec4-255e9e4b3762] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003837599s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-939098 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-939098 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-939098 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-939098 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2b33fa56-44cb-4f5e-a25a-267b58219984] Pending
helpers_test.go:352: "sp-pod" [2b33fa56-44cb-4f5e-a25a-267b58219984] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2b33fa56-44cb-4f5e-a25a-267b58219984] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003928319s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-939098 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-939098 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-939098 delete -f testdata/storage-provisioner/pod.yaml: (1.000311287s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-939098 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [de84ed6d-aa84-4d7a-8378-5a36ec1be7d2] Pending
helpers_test.go:352: "sp-pod" [de84ed6d-aa84-4d7a-8378-5a36ec1be7d2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003831797s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-939098 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.96s)
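The sequence above is the standard PVC persistence check: write a file through the first pod, delete the pod, then read the file back from a fresh pod bound to the same claim. Condensed to the kubectl steps the test drives:

	kubectl --context functional-939098 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-939098 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-939098 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-939098 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-939098 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-939098 exec sp-pod -- ls /tmp/mount   # foo survives the pod being replaced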

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh -n functional-939098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 cp functional-939098:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2222964975/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh -n functional-939098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh -n functional-939098 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.42s)

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/291060/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo cat /etc/test/nested/copy/291060/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (2.11s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/291060.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo cat /etc/ssl/certs/291060.pem"
2025/11/21 14:16:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/291060.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo cat /usr/share/ca-certificates/291060.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2910602.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo cat /etc/ssl/certs/2910602.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2910602.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo cat /usr/share/ca-certificates/2910602.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)
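The hashed filenames checked above (51391683.0, 3ec20f2e.0) look like OpenSSL subject-hash links: the synced certificate is expected under both its plain name and `<subject_hash>.0` so the system trust store can locate it. If that reading is right, the expected hash for a given cert can be computed like this (path illustrative):

	openssl x509 -in /etc/ssl/certs/291060.pem -noout -subject_hash   # prints the 8-hex-digit name, e.g. 51391683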

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-939098 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 ssh "sudo systemctl is-active docker": exit status 1 (343.450776ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 ssh "sudo systemctl is-active containerd": exit status 1 (364.32137ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
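
The pass condition above is worth spelling out: `systemctl is-active` exits non-zero when a unit is not active (ssh relays that as status 3 here), so on a crio cluster the "failed" docker and containerd probes are exactly what the test wants. A hedged Go sketch of the same probe, assuming the functional-939098 profile is up:

// runtime_probe_sketch.go - hedged illustration.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("minikube", "-p", "functional-939098", "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// state "inactive" together with a non-nil err is the expected outcome.
		fmt.Printf("%s: state=%q err=%v\n", unit, state, err)
	}
}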

                                                
                                    
TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-939098 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-939098 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-939098 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 313481: os: process already finished
helpers_test.go:519: unable to terminate pid 313271: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-939098 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-939098 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-939098 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [1153f3fd-2409-4a79-b89c-729fc1865795] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [1153f3fd-2409-4a79-b89c-729fc1865795] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.002793387s
I1121 14:06:21.716774  291060 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.49s)
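
The wait above is a label-selector poll with a 4m budget. A minimal Go sketch of the same loop, assuming kubectl and the functional-939098 context; error handling is deliberately thin:

// wait_for_pod_sketch.go - hedged illustration.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Ask only for the pod phase; an error just means "not there yet".
		out, _ := exec.Command("kubectl", "--context", "functional-939098",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[0].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("nginx-svc is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for nginx-svc")
}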

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-939098 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.160.60 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
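
The 10.106.160.60 address is the LoadBalancer ingress IP that `minikube tunnel` made routable from the host. A hedged Go sketch of the same direct check, assuming a tunnel is running in another terminal:

// tunnel_access_sketch.go - hedged illustration.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Read the ingress IP the tunnel assigned to the service.
	out, err := exec.Command("kubectl", "--context", "functional-939098",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		fmt.Println("could not read ingress IP:", err)
		return
	}
	ip := strings.TrimSpace(string(out))
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://" + ip)
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("tunnel at http://%s answered %s\n", ip, resp.Status)
}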

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-939098 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "365.770831ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.946738ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
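
The quoted "Took ..." figures are simple wall-clock measurements around the child process. A one-screen Go sketch of the same pattern:

// timed_run_sketch.go - hedged illustration.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	err := exec.Command("minikube", "profile", "list").Run()
	// Mirrors the log's format: Took "365.770831ms" to run "...".
	fmt.Printf("Took %q to run \"minikube profile list\" (err: %v)\n",
		time.Since(start).String(), err)
}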

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "350.976802ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "58.000126ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdany-port2313368704/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763734586991983138" to /tmp/TestFunctionalparallelMountCmdany-port2313368704/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763734586991983138" to /tmp/TestFunctionalparallelMountCmdany-port2313368704/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763734586991983138" to /tmp/TestFunctionalparallelMountCmdany-port2313368704/001/test-1763734586991983138
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.554489ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1121 14:16:27.359840  291060 retry.go:31] will retry after 316.659737ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 21 14:16 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 21 14:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 21 14:16 test-1763734586991983138
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh cat /mount-9p/test-1763734586991983138
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-939098 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [f06bf4bc-f764-4ab6-bb3c-719f8dd34276] Pending
helpers_test.go:352: "busybox-mount" [f06bf4bc-f764-4ab6-bb3c-719f8dd34276] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [f06bf4bc-f764-4ab6-bb3c-719f8dd34276] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [f06bf4bc-f764-4ab6-bb3c-719f8dd34276] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004012254s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-939098 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdany-port2313368704/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.79s)
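
Note the retry at 14:16:27 above: the 9p mount appears asynchronously after the `minikube mount` daemon starts, so the first `findmnt` probe can fail and is retried. A hedged Go sketch of that probe loop; attempt counts and delays are illustrative:

// mount_probe_sketch.go - hedged illustration.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 5; attempt++ {
		// Same probe the test runs inside the node.
		err := exec.Command("minikube", "-p", "functional-939098", "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is up")
			return
		}
		fmt.Printf("attempt %d: mount not ready (%v), retrying\n", attempt, err)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}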

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.07s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdspecific-port2145847530/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (391.430677ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1121 14:16:35.172702  291060 retry.go:31] will retry after 610.752679ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdspecific-port2145847530/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 ssh "sudo umount -f /mount-9p": exit status 1 (270.539736ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-939098 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdspecific-port2145847530/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273029158/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273029158/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273029158/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T" /mount1: exit status 1 (602.849672ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1121 14:16:37.462622  291060 retry.go:31] will retry after 599.723431ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-939098 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273029158/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273029158/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-939098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273029158/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 service list -o json
functional_test.go:1504: Took "652.449209ms" to run "out/minikube-linux-arm64 -p functional-939098 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.06s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-939098 version -o=json --components: (1.060172834s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-939098 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-939098 image ls --format short --alsologtostderr:
I1121 14:16:55.653180  319271 out.go:360] Setting OutFile to fd 1 ...
I1121 14:16:55.653385  319271 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:16:55.653415  319271 out.go:374] Setting ErrFile to fd 2...
I1121 14:16:55.653435  319271 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:16:55.653776  319271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
I1121 14:16:55.654506  319271 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:16:55.654664  319271 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:16:55.655152  319271 cli_runner.go:164] Run: docker container inspect functional-939098 --format={{.State.Status}}
I1121 14:16:55.672673  319271 ssh_runner.go:195] Run: systemctl --version
I1121 14:16:55.672725  319271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
I1121 14:16:55.717072  319271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
I1121 14:16:55.823030  319271 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
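
The stderr above shows that `image ls` on a crio runtime is backed by `sudo crictl images --output json` inside the node. A hedged Go sketch that decodes that JSON into repo tags; the struct mirrors the field names visible in the ImageListJson output below and is an assumption about crictl's exact shape:

// crictl_images_sketch.go - hedged illustration.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-939098", "ssh",
		"sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}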

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-939098 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-939098 image ls --format table --alsologtostderr:
I1121 14:16:56.888791  319669 out.go:360] Setting OutFile to fd 1 ...
I1121 14:16:56.889089  319669 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:16:56.889109  319669 out.go:374] Setting ErrFile to fd 2...
I1121 14:16:56.889115  319669 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:16:56.889510  319669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
I1121 14:16:56.890622  319669 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:16:56.890825  319669 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:16:56.891506  319669 cli_runner.go:164] Run: docker container inspect functional-939098 --format={{.State.Status}}
I1121 14:16:56.911820  319669 ssh_runner.go:195] Run: systemctl --version
I1121 14:16:56.911918  319669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
I1121 14:16:56.931651  319669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
I1121 14:16:57.031166  319669 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-939098 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1
b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["regis
try.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec9697
6a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb2
4b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef313640
21f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-939098 image ls --format json --alsologtostderr:
I1121 14:16:56.635696  319602 out.go:360] Setting OutFile to fd 1 ...
I1121 14:16:56.635886  319602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:16:56.635921  319602 out.go:374] Setting ErrFile to fd 2...
I1121 14:16:56.635943  319602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:16:56.636671  319602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
I1121 14:16:56.637331  319602 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:16:56.637447  319602 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:16:56.637885  319602 cli_runner.go:164] Run: docker container inspect functional-939098 --format={{.State.Status}}
I1121 14:16:56.659162  319602 ssh_runner.go:195] Run: systemctl --version
I1121 14:16:56.659217  319602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
I1121 14:16:56.681841  319602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
I1121 14:16:56.791338  319602 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-939098 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-939098 image ls --format yaml --alsologtostderr:
I1121 14:16:56.362488  319524 out.go:360] Setting OutFile to fd 1 ...
I1121 14:16:56.362717  319524 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:16:56.362732  319524 out.go:374] Setting ErrFile to fd 2...
I1121 14:16:56.362738  319524 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:16:56.363105  319524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
I1121 14:16:56.363919  319524 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:16:56.364323  319524 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:16:56.364853  319524 cli_runner.go:164] Run: docker container inspect functional-939098 --format={{.State.Status}}
I1121 14:16:56.387032  319524 ssh_runner.go:195] Run: systemctl --version
I1121 14:16:56.387087  319524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
I1121 14:16:56.407608  319524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
I1121 14:16:56.515606  319524 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-939098 ssh pgrep buildkitd: exit status 1 (332.991003ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image build -t localhost/my-image:functional-939098 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-939098 image build -t localhost/my-image:functional-939098 testdata/build --alsologtostderr: (3.316552914s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-939098 image build -t localhost/my-image:functional-939098 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 84816909c6d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-939098
--> bfea79e86dd
Successfully tagged localhost/my-image:functional-939098
bfea79e86ddea33d4762abf8600cd7af216a963cccd4b976a371b18223bdb68c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-939098 image build -t localhost/my-image:functional-939098 testdata/build --alsologtostderr:
I1121 14:16:56.491946  319564 out.go:360] Setting OutFile to fd 1 ...
I1121 14:16:56.492837  319564 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:16:56.492876  319564 out.go:374] Setting ErrFile to fd 2...
I1121 14:16:56.492898  319564 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:16:56.493189  319564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
I1121 14:16:56.493900  319564 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:16:56.494623  319564 config.go:182] Loaded profile config "functional-939098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:16:56.495119  319564 cli_runner.go:164] Run: docker container inspect functional-939098 --format={{.State.Status}}
I1121 14:16:56.518576  319564 ssh_runner.go:195] Run: systemctl --version
I1121 14:16:56.518624  319564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-939098
I1121 14:16:56.537972  319564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/functional-939098/id_rsa Username:docker}
I1121 14:16:56.659159  319564 build_images.go:162] Building image from path: /tmp/build.4286923412.tar
I1121 14:16:56.659237  319564 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1121 14:16:56.667132  319564 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4286923412.tar
I1121 14:16:56.672271  319564 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4286923412.tar: stat -c "%s %y" /var/lib/minikube/build/build.4286923412.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4286923412.tar': No such file or directory
I1121 14:16:56.672300  319564 ssh_runner.go:362] scp /tmp/build.4286923412.tar --> /var/lib/minikube/build/build.4286923412.tar (3072 bytes)
I1121 14:16:56.692359  319564 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4286923412
I1121 14:16:56.701013  319564 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4286923412 -xf /var/lib/minikube/build/build.4286923412.tar
I1121 14:16:56.712070  319564 crio.go:315] Building image: /var/lib/minikube/build/build.4286923412
I1121 14:16:56.712144  319564 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-939098 /var/lib/minikube/build/build.4286923412 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1121 14:16:59.716696  319564 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-939098 /var/lib/minikube/build/build.4286923412 --cgroup-manager=cgroupfs: (3.004530164s)
I1121 14:16:59.716764  319564 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4286923412
I1121 14:16:59.724581  319564 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4286923412.tar
I1121 14:16:59.732811  319564 build_images.go:218] Built localhost/my-image:functional-939098 from /tmp/build.4286923412.tar
I1121 14:16:59.732843  319564 build_images.go:134] succeeded building to: functional-939098
I1121 14:16:59.732854  319564 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)
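
The stderr above lays out the crio build path: tar the local context, copy it into the node under /var/lib/minikube/build, unpack it, and run `podman build ... --cgroup-manager=cgroupfs` there. A hedged Go sketch of those steps, with illustrative paths standing in for the generated build.<N>.tar names; `minikube cp` is used here as one way to mirror the scp step:

// image_build_sketch.go - hedged illustration.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		fmt.Printf("%v failed: %v\n%s\n", args, err, out)
	}
}

func main() {
	profile := "functional-939098"
	// 1. Ship the pre-tarred build context into the node.
	run("minikube", "-p", profile, "cp", "build.tar", "/var/lib/minikube/build/build.tar")
	// 2. Unpack and build inside the node, as the test's ssh_runner does.
	run("minikube", "-p", profile, "ssh",
		"sudo mkdir -p /var/lib/minikube/build/ctx && "+
			"sudo tar -C /var/lib/minikube/build/ctx -xf /var/lib/minikube/build/build.tar && "+
			"sudo podman build -t localhost/my-image:sketch /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs")
}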

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-939098
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image rm kicbase/echo-server:functional-939098 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-939098 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.29s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-939098
--- PASS: TestFunctional/delete_echo-server_images (0.29s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-939098
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-939098
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (203.31s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1121 14:19:05.600819  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m22.412782614s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (203.31s)

TestMultiControlPlane/serial/DeployApp (6.68s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- rollout status deployment/busybox
E1121 14:20:28.669231  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 kubectl -- rollout status deployment/busybox: (3.854903339s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-27bdh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-86ptp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-x6fv8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-27bdh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-86ptp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-x6fv8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-27bdh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-86ptp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-x6fv8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.68s)

TestMultiControlPlane/serial/PingHostFromPods (1.59s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-27bdh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-27bdh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-86ptp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-86ptp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-x6fv8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 kubectl -- exec busybox-7b57f96db7-x6fv8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)
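For reference, each of the three checks above does the same two steps: resolve host.minikube.internal inside the pod, then ping the address that comes back. The shell pipeline trims busybox's nslookup output down to just the address (awk 'NR==5' keeps the fifth line, cut -d' ' -f3 its third field). A minimal standalone sketch of the same flow in Go, assuming the minikube binary and profile used in this run (hostIPFromPod is a hypothetical helper, not part of ha_test.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostIPFromPod runs the same pipeline as ha_test.go:207 inside the pod and
// returns the resolved address of host.minikube.internal.
func hostIPFromPod(profile, pod string) (string, error) {
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"kubectl", "--", "exec", pod, "--", "sh", "-c", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ip, err := hostIPFromPod("ha-725910", "busybox-7b57f96db7-27bdh")
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	// ha_test.go:218 then sends a single ICMP probe back to that address
	// (192.168.49.1, the docker bridge gateway, in this run).
	err = exec.Command("out/minikube-linux-arm64", "-p", "ha-725910",
		"kubectl", "--", "exec", "busybox-7b57f96db7-27bdh", "--",
		"sh", "-c", "ping -c 1 "+ip).Run()
	fmt.Println("host reachable:", err == nil)
}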

TestMultiControlPlane/serial/AddWorkerNode (61.71s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 node add --alsologtostderr -v 5
E1121 14:21:12.226320  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:12.232755  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:12.244332  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:12.265772  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:12.307259  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:12.388825  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:12.550276  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:12.872060  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:13.513587  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:14.795046  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:17.356932  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:22.478249  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:32.719786  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 node add --alsologtostderr -v 5: (1m0.67647927s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5: (1.033529325s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.71s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-725910 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.056247878s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

TestMultiControlPlane/serial/CopyFile (20.02s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 status --output json --alsologtostderr -v 5: (1.015283801s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp testdata/cp-test.txt ha-725910:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile732003243/001/cp-test_ha-725910.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910:/home/docker/cp-test.txt ha-725910-m02:/home/docker/cp-test_ha-725910_ha-725910-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m02 "sudo cat /home/docker/cp-test_ha-725910_ha-725910-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910:/home/docker/cp-test.txt ha-725910-m03:/home/docker/cp-test_ha-725910_ha-725910-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m03 "sudo cat /home/docker/cp-test_ha-725910_ha-725910-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910:/home/docker/cp-test.txt ha-725910-m04:/home/docker/cp-test_ha-725910_ha-725910-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m04 "sudo cat /home/docker/cp-test_ha-725910_ha-725910-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp testdata/cp-test.txt ha-725910-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile732003243/001/cp-test_ha-725910-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m02:/home/docker/cp-test.txt ha-725910:/home/docker/cp-test_ha-725910-m02_ha-725910.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910 "sudo cat /home/docker/cp-test_ha-725910-m02_ha-725910.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m02:/home/docker/cp-test.txt ha-725910-m03:/home/docker/cp-test_ha-725910-m02_ha-725910-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m03 "sudo cat /home/docker/cp-test_ha-725910-m02_ha-725910-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m02:/home/docker/cp-test.txt ha-725910-m04:/home/docker/cp-test_ha-725910-m02_ha-725910-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m04 "sudo cat /home/docker/cp-test_ha-725910-m02_ha-725910-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp testdata/cp-test.txt ha-725910-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile732003243/001/cp-test_ha-725910-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m03:/home/docker/cp-test.txt ha-725910:/home/docker/cp-test_ha-725910-m03_ha-725910.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910 "sudo cat /home/docker/cp-test_ha-725910-m03_ha-725910.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m03:/home/docker/cp-test.txt ha-725910-m02:/home/docker/cp-test_ha-725910-m03_ha-725910-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m02 "sudo cat /home/docker/cp-test_ha-725910-m03_ha-725910-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m03:/home/docker/cp-test.txt ha-725910-m04:/home/docker/cp-test_ha-725910-m03_ha-725910-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m04 "sudo cat /home/docker/cp-test_ha-725910-m03_ha-725910-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp testdata/cp-test.txt ha-725910-m04:/home/docker/cp-test.txt
E1121 14:21:53.201733  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile732003243/001/cp-test_ha-725910-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m04:/home/docker/cp-test.txt ha-725910:/home/docker/cp-test_ha-725910-m04_ha-725910.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910 "sudo cat /home/docker/cp-test_ha-725910-m04_ha-725910.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m04:/home/docker/cp-test.txt ha-725910-m02:/home/docker/cp-test_ha-725910-m04_ha-725910-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m02 "sudo cat /home/docker/cp-test_ha-725910-m04_ha-725910-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 cp ha-725910-m04:/home/docker/cp-test.txt ha-725910-m03:/home/docker/cp-test_ha-725910-m04_ha-725910-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 ssh -n ha-725910-m03 "sudo cat /home/docker/cp-test_ha-725910-m04_ha-725910-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.02s)

TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 node stop m02 --alsologtostderr -v 5: (12.105830697s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5: exit status 7 (790.182183ms)
-- stdout --
	ha-725910
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-725910-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-725910-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-725910-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1121 14:22:10.065238  334501 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:22:10.065456  334501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:22:10.065485  334501 out.go:374] Setting ErrFile to fd 2...
	I1121 14:22:10.065784  334501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:22:10.066430  334501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:22:10.066803  334501 out.go:368] Setting JSON to false
	I1121 14:22:10.066920  334501 mustload.go:66] Loading cluster: ha-725910
	I1121 14:22:10.067091  334501 notify.go:221] Checking for updates...
	I1121 14:22:10.067500  334501 config.go:182] Loaded profile config "ha-725910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:22:10.067575  334501 status.go:174] checking status of ha-725910 ...
	I1121 14:22:10.068275  334501 cli_runner.go:164] Run: docker container inspect ha-725910 --format={{.State.Status}}
	I1121 14:22:10.090891  334501 status.go:371] ha-725910 host status = "Running" (err=<nil>)
	I1121 14:22:10.090922  334501 host.go:66] Checking if "ha-725910" exists ...
	I1121 14:22:10.091281  334501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-725910
	I1121 14:22:10.126067  334501 host.go:66] Checking if "ha-725910" exists ...
	I1121 14:22:10.126398  334501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:22:10.126452  334501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-725910
	I1121 14:22:10.145924  334501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/ha-725910/id_rsa Username:docker}
	I1121 14:22:10.250170  334501 ssh_runner.go:195] Run: systemctl --version
	I1121 14:22:10.256853  334501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:22:10.270799  334501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:22:10.325403  334501 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-21 14:22:10.315377754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:22:10.325927  334501 kubeconfig.go:125] found "ha-725910" server: "https://192.168.49.254:8443"
	I1121 14:22:10.325968  334501 api_server.go:166] Checking apiserver status ...
	I1121 14:22:10.326015  334501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:22:10.338381  334501 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup
	I1121 14:22:10.347240  334501 api_server.go:182] apiserver freezer: "11:freezer:/docker/561bda64fa2ee1bbafb0b6fb9c2cb134deeb0d9acb403908c64cbdeb38fbd985/crio/crio-0e2bcea865e22e841d8fa57793dafbdc9ec06e9bd4c2fce7ce2cf211b8710bb7"
	I1121 14:22:10.347311  334501 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/561bda64fa2ee1bbafb0b6fb9c2cb134deeb0d9acb403908c64cbdeb38fbd985/crio/crio-0e2bcea865e22e841d8fa57793dafbdc9ec06e9bd4c2fce7ce2cf211b8710bb7/freezer.state
	I1121 14:22:10.355449  334501 api_server.go:204] freezer state: "THAWED"
	I1121 14:22:10.355478  334501 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1121 14:22:10.366140  334501 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1121 14:22:10.366170  334501 status.go:463] ha-725910 apiserver status = Running (err=<nil>)
	I1121 14:22:10.366182  334501 status.go:176] ha-725910 status: &{Name:ha-725910 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:22:10.366198  334501 status.go:174] checking status of ha-725910-m02 ...
	I1121 14:22:10.366519  334501 cli_runner.go:164] Run: docker container inspect ha-725910-m02 --format={{.State.Status}}
	I1121 14:22:10.387199  334501 status.go:371] ha-725910-m02 host status = "Stopped" (err=<nil>)
	I1121 14:22:10.387231  334501 status.go:384] host is not running, skipping remaining checks
	I1121 14:22:10.387243  334501 status.go:176] ha-725910-m02 status: &{Name:ha-725910-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:22:10.387263  334501 status.go:174] checking status of ha-725910-m03 ...
	I1121 14:22:10.387581  334501 cli_runner.go:164] Run: docker container inspect ha-725910-m03 --format={{.State.Status}}
	I1121 14:22:10.407498  334501 status.go:371] ha-725910-m03 host status = "Running" (err=<nil>)
	I1121 14:22:10.407528  334501 host.go:66] Checking if "ha-725910-m03" exists ...
	I1121 14:22:10.407912  334501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-725910-m03
	I1121 14:22:10.434212  334501 host.go:66] Checking if "ha-725910-m03" exists ...
	I1121 14:22:10.434558  334501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:22:10.434601  334501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-725910-m03
	I1121 14:22:10.452489  334501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/ha-725910-m03/id_rsa Username:docker}
	I1121 14:22:10.557930  334501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:22:10.571933  334501 kubeconfig.go:125] found "ha-725910" server: "https://192.168.49.254:8443"
	I1121 14:22:10.571973  334501 api_server.go:166] Checking apiserver status ...
	I1121 14:22:10.572019  334501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:22:10.583468  334501 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	I1121 14:22:10.594510  334501 api_server.go:182] apiserver freezer: "11:freezer:/docker/070d8749f78c1766762ea2592fc51df956ceac13bdd65c0792305a56bba6f0f0/crio/crio-ba9cf6c483f75cfec305beb11ccb9d3e25090166c6a88aa39e9de6807dca128f"
	I1121 14:22:10.594590  334501 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/070d8749f78c1766762ea2592fc51df956ceac13bdd65c0792305a56bba6f0f0/crio/crio-ba9cf6c483f75cfec305beb11ccb9d3e25090166c6a88aa39e9de6807dca128f/freezer.state
	I1121 14:22:10.602577  334501 api_server.go:204] freezer state: "THAWED"
	I1121 14:22:10.602608  334501 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1121 14:22:10.610935  334501 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1121 14:22:10.610963  334501 status.go:463] ha-725910-m03 apiserver status = Running (err=<nil>)
	I1121 14:22:10.610972  334501 status.go:176] ha-725910-m03 status: &{Name:ha-725910-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:22:10.610988  334501 status.go:174] checking status of ha-725910-m04 ...
	I1121 14:22:10.611301  334501 cli_runner.go:164] Run: docker container inspect ha-725910-m04 --format={{.State.Status}}
	I1121 14:22:10.629700  334501 status.go:371] ha-725910-m04 host status = "Running" (err=<nil>)
	I1121 14:22:10.629725  334501 host.go:66] Checking if "ha-725910-m04" exists ...
	I1121 14:22:10.630101  334501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-725910-m04
	I1121 14:22:10.649561  334501 host.go:66] Checking if "ha-725910-m04" exists ...
	I1121 14:22:10.649879  334501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:22:10.649924  334501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-725910-m04
	I1121 14:22:10.671781  334501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/ha-725910-m04/id_rsa Username:docker}
	I1121 14:22:10.774056  334501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:22:10.788361  334501 status.go:176] ha-725910-m04 status: &{Name:ha-725910-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
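The stderr trace above also documents how the status probe decides an apiserver is Running: find the kube-apiserver process, check that its freezer cgroup reports THAWED (i.e. the container is not paused), then query /healthz on the control-plane endpoint. A rough standalone sketch of those two checks, assuming the paths and endpoint shown in the log (this illustrates the sequence the trace describes, not minikube's actual implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Pass the cgroup path as an argument, e.g.
	// /sys/fs/cgroup/freezer/docker/<id>/crio/<id>/freezer.state
	state, err := os.ReadFile(os.Args[1])
	if err != nil || strings.TrimSpace(string(state)) != "THAWED" {
		fmt.Println("apiserver container paused or gone:", err)
		return
	}
	// The control-plane VIP from the log; verification is skipped because
	// the apiserver serves a cluster-internal certificate.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status) // 200 OK for a healthy apiserver
}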

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (26.3s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 node start m02 --alsologtostderr -v 5
E1121 14:22:34.163467  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 node start m02 --alsologtostderr -v 5: (24.898218571s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5: (1.261038303s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (26.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.2631644s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (121.04s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 stop --alsologtostderr -v 5: (27.146916542s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 start --wait true --alsologtostderr -v 5
E1121 14:23:56.085133  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:24:05.599903  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 start --wait true --alsologtostderr -v 5: (1m33.688322386s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (121.04s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.55s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 node delete m03 --alsologtostderr -v 5: (10.562235736s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.55s)
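The go-template passed to kubectl at ha_test.go:521 prints the Ready condition of every remaining node, which is how the test confirms the cluster is still healthy after the delete. A self-contained sketch that evaluates the very same template over a stub NodeList (the JSON literal is illustrative test data, not output from this run):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// The template from ha_test.go:521: for each node, walk its conditions and
// print the status of the one whose type is Ready.
const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// A two-node NodeList trimmed to the fields the template touches.
const nodes = `{"items":[
 {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
 {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

func main() {
	var v interface{}
	if err := json.Unmarshal([]byte(nodes), &v); err != nil {
		panic(err)
	}
	// Prints " True" once per node; the check fails if any line differs.
	template.Must(template.New("ready").Parse(tpl)).Execute(os.Stdout, v)
}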

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (36.25s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 stop --alsologtostderr -v 5: (36.129430152s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5: exit status 7 (117.090562ms)
-- stdout --
	ha-725910
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-725910-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-725910-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1121 14:25:28.707775  346003 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:25:28.707891  346003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:25:28.707902  346003 out.go:374] Setting ErrFile to fd 2...
	I1121 14:25:28.707907  346003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:25:28.708276  346003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:25:28.708519  346003 out.go:368] Setting JSON to false
	I1121 14:25:28.708546  346003 mustload.go:66] Loading cluster: ha-725910
	I1121 14:25:28.709268  346003 config.go:182] Loaded profile config "ha-725910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:25:28.709294  346003 status.go:174] checking status of ha-725910 ...
	I1121 14:25:28.709766  346003 notify.go:221] Checking for updates...
	I1121 14:25:28.710332  346003 cli_runner.go:164] Run: docker container inspect ha-725910 --format={{.State.Status}}
	I1121 14:25:28.730429  346003 status.go:371] ha-725910 host status = "Stopped" (err=<nil>)
	I1121 14:25:28.730448  346003 status.go:384] host is not running, skipping remaining checks
	I1121 14:25:28.730454  346003 status.go:176] ha-725910 status: &{Name:ha-725910 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:25:28.730488  346003 status.go:174] checking status of ha-725910-m02 ...
	I1121 14:25:28.730799  346003 cli_runner.go:164] Run: docker container inspect ha-725910-m02 --format={{.State.Status}}
	I1121 14:25:28.751364  346003 status.go:371] ha-725910-m02 host status = "Stopped" (err=<nil>)
	I1121 14:25:28.751393  346003 status.go:384] host is not running, skipping remaining checks
	I1121 14:25:28.751405  346003 status.go:176] ha-725910-m02 status: &{Name:ha-725910-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:25:28.751424  346003 status.go:174] checking status of ha-725910-m04 ...
	I1121 14:25:28.751715  346003 cli_runner.go:164] Run: docker container inspect ha-725910-m04 --format={{.State.Status}}
	I1121 14:25:28.774763  346003 status.go:371] ha-725910-m04 host status = "Stopped" (err=<nil>)
	I1121 14:25:28.774783  346003 status.go:384] host is not running, skipping remaining checks
	I1121 14:25:28.774790  346003 status.go:176] ha-725910-m04 status: &{Name:ha-725910-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.25s)

TestMultiControlPlane/serial/RestartCluster (96.05s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1121 14:26:12.226647  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:26:39.928167  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m35.072509775s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (96.05s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

TestMultiControlPlane/serial/AddSecondaryNode (83.02s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 node add --control-plane --alsologtostderr -v 5: (1m21.92988451s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-725910 status --alsologtostderr -v 5: (1.08499419s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (83.02s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.054479628s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

TestJSONOutput/start/Command (79.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-636314 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1121 14:29:05.600590  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-636314 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.780812689s)
--- PASS: TestJSONOutput/start/Command (79.78s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-636314 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-636314 --output=json --user=testUser: (5.838610892s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-812215 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-812215 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (97.489325ms)
-- stdout --
	{"specversion":"1.0","id":"ec40c952-7401-435a-adc3-a9dbee8863af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-812215] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b2858fc-d070-401d-bd20-872186005e8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21847"}}
	{"specversion":"1.0","id":"02cce65a-f435-43f4-bc03-a9aceba8ceed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b7364d90-2fa2-4194-8a42-286e2cd3853c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig"}}
	{"specversion":"1.0","id":"864270c1-9da6-4c7f-b125-e8e41faf9273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube"}}
	{"specversion":"1.0","id":"2e9ff502-7d19-4c1c-a04a-31ef3c9a0b84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"98f05013-16ba-4f2e-aa36-7b164c3b8209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ee2dba88-6cb0-4d25-8f08-65521c0a4496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-812215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-812215
--- PASS: TestErrorJSONOutput (0.25s)
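Each stdout line above is a CloudEvents envelope, which is what keeps --output=json failures machine-readable: the final event has type io.k8s.sigs.minikube.error and carries the DRV_UNSUPPORTED_OS name, exit code 56, and the human-readable message in its data map. A small sketch of a consumer for such a stream (a hypothetical reader, not part of the test suite):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the envelope fields this sketch needs.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // tolerate non-JSON noise in the stream
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			// For the run above this prints:
			// DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/arm64
			fmt.Printf("%s (exit %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}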

TestKicCustomNetwork/create_custom_network (42.65s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-143575 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-143575 --network=: (40.369061146s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-143575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-143575
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-143575: (2.262798698s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.65s)

TestKicCustomNetwork/use_default_bridge_network (36.11s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-491616 --network=bridge
E1121 14:31:12.229493  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-491616 --network=bridge: (33.841696171s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-491616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-491616
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-491616: (2.246960201s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.11s)

TestKicExistingNetwork (36.22s)

=== RUN   TestKicExistingNetwork
I1121 14:31:32.236373  291060 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1121 14:31:32.252399  291060 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1121 14:31:32.253346  291060 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1121 14:31:32.253385  291060 cli_runner.go:164] Run: docker network inspect existing-network
W1121 14:31:32.269241  291060 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1121 14:31:32.269273  291060 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1121 14:31:32.269292  291060 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1121 14:31:32.269423  291060 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1121 14:31:32.288310  291060 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-82d3b8bc8a36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:46:f3:82:e8:95} reservation:<nil>}
I1121 14:31:32.288758  291060 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d9f430}
I1121 14:31:32.288808  291060 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1121 14:31:32.288861  291060 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1121 14:31:32.349476  291060 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-502613 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-502613 --network=existing-network: (33.989729411s)
helpers_test.go:175: Cleaning up "existing-network-502613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-502613
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-502613: (2.08343096s)
I1121 14:32:08.439229  291060 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.22s)
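Editor's note: the trace above shows the free-subnet scan behind network_create: 192.168.49.0/24 is skipped because an existing bridge already owns it, and the next candidate, 192.168.58.0/24, is used for docker network create with the minikube labels. A rough sketch of that first-free scan, assuming the fixed step of 9 between third octets that the 49 -> 58 progression suggests; this is an illustration, not minikube's network package:

    package main

    import "fmt"

    // firstFreeSubnet walks candidate 192.168.x.0/24 ranges and returns the
    // first one absent from taken. Real code would inspect the host's docker
    // networks and routes rather than consult a hard-coded set.
    func firstFreeSubnet(taken map[string]bool) (string, bool) {
        for octet := 49; octet <= 247; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[subnet] {
                return subnet, true
            }
        }
        return "", false
    }

    func main() {
        taken := map[string]bool{"192.168.49.0/24": true} // the existing minikube bridge
        if s, ok := firstFreeSubnet(taken); ok {
            fmt.Println("using free private subnet", s) // prints 192.168.58.0/24
        }
    }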

TestKicCustomSubnet (35.9s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-523971 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-523971 --subnet=192.168.60.0/24: (33.653535049s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-523971 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-523971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-523971
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-523971: (2.216206704s)
--- PASS: TestKicCustomSubnet (35.90s)
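Editor's note: the verification step above leans on docker's Go-template support: {{(index .IPAM.Config 0).Subnet}} indexes into the network's IPAM config list and prints the first entry's subnet. The same template can be exercised locally with text/template; the structs below are pared-down stand-ins for docker's real inspect payload:

    package main

    import (
        "os"
        "text/template"
    )

    // ipamConfig models only the field the template touches.
    type ipamConfig struct{ Subnet string }

    type network struct {
        IPAM struct{ Config []ipamConfig }
    }

    func main() {
        var n network
        n.IPAM.Config = []ipamConfig{{Subnet: "192.168.60.0/24"}}

        // The exact template string the test passes to --format.
        tmpl := template.Must(template.New("subnet").Parse(`{{(index .IPAM.Config 0).Subnet}}`))
        _ = tmpl.Execute(os.Stdout, n) // prints 192.168.60.0/24
    }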

TestKicStaticIP (38.18s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-483468 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-483468 --static-ip=192.168.200.200: (35.826156892s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-483468 ip
helpers_test.go:175: Cleaning up "static-ip-483468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-483468
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-483468: (2.164866865s)
--- PASS: TestKicStaticIP (38.18s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-371705 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-371705 --driver=docker  --container-runtime=crio: (32.721216939s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-374175 --driver=docker  --container-runtime=crio
E1121 14:34:05.600800  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-374175 --driver=docker  --container-runtime=crio: (34.889922181s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-371705
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-374175
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-374175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-374175
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-374175: (2.112664519s)
helpers_test.go:175: Cleaning up "first-371705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-371705
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-371705: (2.053601481s)
--- PASS: TestMinikubeProfile (73.28s)

TestMountStart/serial/StartWithMountFirst (8.84s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-799519 --memory=3072 --mount-string /tmp/TestMountStartserial1035395591/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-799519 --memory=3072 --mount-string /tmp/TestMountStartserial1035395591/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.840678155s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.84s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-799519 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.33s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-801472 --memory=3072 --mount-string /tmp/TestMountStartserial1035395591/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-801472 --memory=3072 --mount-string /tmp/TestMountStartserial1035395591/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.330481586s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.33s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-801472 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-799519 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-799519 --alsologtostderr -v=5: (1.726755081s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-801472 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-801472
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-801472: (1.290668008s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.34s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-801472
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-801472: (7.341570275s)
--- PASS: TestMountStart/serial/RestartStopped (8.34s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-801472 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (136.79s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346797 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1121 14:36:12.226599  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:37:08.670916  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346797 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.244473458s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.79s)

TestMultiNode/serial/DeployApp2Nodes (4.81s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-346797 -- rollout status deployment/busybox: (3.032047741s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- exec busybox-7b57f96db7-s528t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- exec busybox-7b57f96db7-vs2lz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- exec busybox-7b57f96db7-s528t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- exec busybox-7b57f96db7-vs2lz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- exec busybox-7b57f96db7-s528t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- exec busybox-7b57f96db7-vs2lz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.81s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- exec busybox-7b57f96db7-s528t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- exec busybox-7b57f96db7-s528t -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- exec busybox-7b57f96db7-vs2lz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346797 -- exec busybox-7b57f96db7-vs2lz -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
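Editor's note: the host-ping check above first resolves host.minikube.internal inside the pod; the pipeline nslookup ... | awk 'NR==5' | cut -d' ' -f3 takes the fifth line of nslookup's output and its third space-delimited field, which is where the busybox test image prints the resolved address, and then pings that IP. A sketch of the same extraction in Go, with an illustrative busybox-style output (real spacing varies by image):

    package main

    import (
        "fmt"
        "strings"
    )

    // resolvedIP mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`:
    // take the fifth line and return its third space-delimited field.
    func resolvedIP(out string) (string, bool) {
        lines := strings.Split(out, "\n")
        if len(lines) < 5 {
            return "", false
        }
        fields := strings.Split(lines[4], " ") // like cut, keep empty fields
        if len(fields) < 3 {
            return "", false
        }
        return fields[2], true
    }

    func main() {
        out := "Server:    10.96.0.10\nAddress 1: 10.96.0.10\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1"
        if ip, ok := resolvedIP(out); ok {
            fmt.Println("would ping", ip) // 192.168.67.1, the gateway pinged above
        }
    }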

TestMultiNode/serial/AddNode (60.28s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-346797 -v=5 --alsologtostderr
E1121 14:37:35.291684  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-346797 -v=5 --alsologtostderr: (59.579469919s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (60.28s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-346797 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.4s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp testdata/cp-test.txt multinode-346797:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp multinode-346797:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2866239475/001/cp-test_multinode-346797.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp multinode-346797:/home/docker/cp-test.txt multinode-346797-m02:/home/docker/cp-test_multinode-346797_multinode-346797-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m02 "sudo cat /home/docker/cp-test_multinode-346797_multinode-346797-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp multinode-346797:/home/docker/cp-test.txt multinode-346797-m03:/home/docker/cp-test_multinode-346797_multinode-346797-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m03 "sudo cat /home/docker/cp-test_multinode-346797_multinode-346797-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp testdata/cp-test.txt multinode-346797-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp multinode-346797-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2866239475/001/cp-test_multinode-346797-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp multinode-346797-m02:/home/docker/cp-test.txt multinode-346797:/home/docker/cp-test_multinode-346797-m02_multinode-346797.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797 "sudo cat /home/docker/cp-test_multinode-346797-m02_multinode-346797.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp multinode-346797-m02:/home/docker/cp-test.txt multinode-346797-m03:/home/docker/cp-test_multinode-346797-m02_multinode-346797-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m03 "sudo cat /home/docker/cp-test_multinode-346797-m02_multinode-346797-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp testdata/cp-test.txt multinode-346797-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp multinode-346797-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2866239475/001/cp-test_multinode-346797-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp multinode-346797-m03:/home/docker/cp-test.txt multinode-346797:/home/docker/cp-test_multinode-346797-m03_multinode-346797.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797 "sudo cat /home/docker/cp-test_multinode-346797-m03_multinode-346797.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 cp multinode-346797-m03:/home/docker/cp-test.txt multinode-346797-m02:/home/docker/cp-test_multinode-346797-m03_multinode-346797-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 ssh -n multinode-346797-m02 "sudo cat /home/docker/cp-test_multinode-346797-m03_multinode-346797-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.40s)

TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-346797 node stop m03: (1.340164502s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346797 status: exit status 7 (538.98299ms)

-- stdout --
	multinode-346797
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-346797-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-346797-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346797 status --alsologtostderr: exit status 7 (534.338607ms)

-- stdout --
	multinode-346797
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-346797-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-346797-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1121 14:38:44.397356  396793 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:38:44.397525  396793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:38:44.397542  396793 out.go:374] Setting ErrFile to fd 2...
	I1121 14:38:44.397547  396793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:38:44.398926  396793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:38:44.399181  396793 out.go:368] Setting JSON to false
	I1121 14:38:44.399233  396793 mustload.go:66] Loading cluster: multinode-346797
	I1121 14:38:44.399332  396793 notify.go:221] Checking for updates...
	I1121 14:38:44.399685  396793 config.go:182] Loaded profile config "multinode-346797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:38:44.399707  396793 status.go:174] checking status of multinode-346797 ...
	I1121 14:38:44.400292  396793 cli_runner.go:164] Run: docker container inspect multinode-346797 --format={{.State.Status}}
	I1121 14:38:44.422194  396793 status.go:371] multinode-346797 host status = "Running" (err=<nil>)
	I1121 14:38:44.422221  396793 host.go:66] Checking if "multinode-346797" exists ...
	I1121 14:38:44.422521  396793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-346797
	I1121 14:38:44.442040  396793 host.go:66] Checking if "multinode-346797" exists ...
	I1121 14:38:44.442511  396793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:38:44.442560  396793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-346797
	I1121 14:38:44.462199  396793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/multinode-346797/id_rsa Username:docker}
	I1121 14:38:44.561937  396793 ssh_runner.go:195] Run: systemctl --version
	I1121 14:38:44.568418  396793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:38:44.582510  396793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:38:44.641033  396793 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:38:44.630563747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:38:44.641650  396793 kubeconfig.go:125] found "multinode-346797" server: "https://192.168.67.2:8443"
	I1121 14:38:44.641677  396793 api_server.go:166] Checking apiserver status ...
	I1121 14:38:44.641718  396793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:38:44.652899  396793 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1235/cgroup
	I1121 14:38:44.660899  396793 api_server.go:182] apiserver freezer: "11:freezer:/docker/3f0ddec28ce3ed38e0e293caeb5c0c3e13d3b5903d4c388832ad8ab2aab8095b/crio/crio-ec2b7bf87963d2a76e689b0cdc6eb65cf91ac89efc9103ced7752447df027a69"
	I1121 14:38:44.660966  396793 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3f0ddec28ce3ed38e0e293caeb5c0c3e13d3b5903d4c388832ad8ab2aab8095b/crio/crio-ec2b7bf87963d2a76e689b0cdc6eb65cf91ac89efc9103ced7752447df027a69/freezer.state
	I1121 14:38:44.668205  396793 api_server.go:204] freezer state: "THAWED"
	I1121 14:38:44.668232  396793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1121 14:38:44.676956  396793 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1121 14:38:44.676987  396793 status.go:463] multinode-346797 apiserver status = Running (err=<nil>)
	I1121 14:38:44.677000  396793 status.go:176] multinode-346797 status: &{Name:multinode-346797 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:38:44.677018  396793 status.go:174] checking status of multinode-346797-m02 ...
	I1121 14:38:44.677325  396793 cli_runner.go:164] Run: docker container inspect multinode-346797-m02 --format={{.State.Status}}
	I1121 14:38:44.693683  396793 status.go:371] multinode-346797-m02 host status = "Running" (err=<nil>)
	I1121 14:38:44.693705  396793 host.go:66] Checking if "multinode-346797-m02" exists ...
	I1121 14:38:44.694109  396793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-346797-m02
	I1121 14:38:44.711942  396793 host.go:66] Checking if "multinode-346797-m02" exists ...
	I1121 14:38:44.712266  396793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:38:44.712315  396793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-346797-m02
	I1121 14:38:44.735727  396793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21847-289204/.minikube/machines/multinode-346797-m02/id_rsa Username:docker}
	I1121 14:38:44.833985  396793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:38:44.847836  396793 status.go:176] multinode-346797-m02 status: &{Name:multinode-346797-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:38:44.847869  396793 status.go:174] checking status of multinode-346797-m03 ...
	I1121 14:38:44.848165  396793 cli_runner.go:164] Run: docker container inspect multinode-346797-m03 --format={{.State.Status}}
	I1121 14:38:44.871355  396793 status.go:371] multinode-346797-m03 host status = "Stopped" (err=<nil>)
	I1121 14:38:44.871378  396793 status.go:384] host is not running, skipping remaining checks
	I1121 14:38:44.871385  396793 status.go:176] multinode-346797-m03 status: &{Name:multinode-346797-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
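Editor's note: the stderr trace above shows how status decides the apiserver is Running: it pgreps the kube-apiserver process, resolves that process's freezer cgroup, confirms freezer.state reads THAWED (a paused container reports FROZEN), and finally expects a 200 from /healthz. A condensed sketch of the last two steps; the placeholder cgroup path and the relaxed TLS handling are simplifications for illustration, not minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os"
        "strings"
        "time"
    )

    // apiserverHealthy mirrors the two checks in the log: the freezer cgroup
    // must report THAWED, and the healthz endpoint must answer 200.
    func apiserverHealthy(freezerStatePath, healthzURL string) (bool, error) {
        state, err := os.ReadFile(freezerStatePath)
        if err != nil {
            return false, err
        }
        if strings.TrimSpace(string(state)) != "THAWED" {
            return false, nil
        }
        // Sketch only: skip cert verification; minikube proper trusts the cluster CA.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(healthzURL)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
        // Path shape taken from the log; <container-id>/<crio-id> are placeholders.
        ok, err := apiserverHealthy(
            "/sys/fs/cgroup/freezer/docker/<container-id>/crio/<crio-id>/freezer.state",
            "https://192.168.67.2:8443/healthz",
        )
        fmt.Println(ok, err)
    }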

TestMultiNode/serial/StartAfterStop (8.64s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-346797 node start m03 -v=5 --alsologtostderr: (7.821632977s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.64s)

TestMultiNode/serial/RestartKeepsNodes (76.1s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-346797
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-346797
E1121 14:39:05.599413  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-346797: (25.139967713s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346797 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346797 --wait=true -v=5 --alsologtostderr: (50.830204346s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-346797
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.10s)

TestMultiNode/serial/DeleteNode (5.76s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-346797 node delete m03: (5.007877166s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.76s)
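Editor's note: the final kubectl call above uses a go-template that walks every node's status.conditions and prints the status of the Ready condition. The same template logic can be run locally with text/template; the types below are illustrative stand-ins (kubectl applies the template to lowercase JSON keys, so the field names are capitalised here to match the Go structs):

    package main

    import (
        "os"
        "text/template"
    )

    // Minimal stand-ins for the fields the template dereferences.
    type condition struct{ Type, Status string }
    type nodeStatus struct{ Conditions []condition }
    type node struct{ Status nodeStatus }
    type nodeList struct{ Items []node }

    func main() {
        list := nodeList{Items: []node{
            {Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
            {Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
        }}

        // Same shape as the test's template, with capitalised field names.
        const src = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`
        tmpl := template.Must(template.New("ready").Parse(src))
        _ = tmpl.Execute(os.Stdout, list) // prints " True" once per Ready node
    }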

TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-346797 stop: (23.827193384s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346797 status: exit status 7 (102.359538ms)

-- stdout --
	multinode-346797
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-346797-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346797 status --alsologtostderr: exit status 7 (85.448493ms)

-- stdout --
	multinode-346797
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-346797-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1121 14:40:39.358107  404591 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:40:39.358435  404591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:39.358450  404591 out.go:374] Setting ErrFile to fd 2...
	I1121 14:40:39.358457  404591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:39.358712  404591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:40:39.358914  404591 out.go:368] Setting JSON to false
	I1121 14:40:39.358951  404591 mustload.go:66] Loading cluster: multinode-346797
	I1121 14:40:39.359392  404591 config.go:182] Loaded profile config "multinode-346797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:39.359410  404591 status.go:174] checking status of multinode-346797 ...
	I1121 14:40:39.359910  404591 cli_runner.go:164] Run: docker container inspect multinode-346797 --format={{.State.Status}}
	I1121 14:40:39.360174  404591 notify.go:221] Checking for updates...
	I1121 14:40:39.378114  404591 status.go:371] multinode-346797 host status = "Stopped" (err=<nil>)
	I1121 14:40:39.378138  404591 status.go:384] host is not running, skipping remaining checks
	I1121 14:40:39.378145  404591 status.go:176] multinode-346797 status: &{Name:multinode-346797 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:40:39.378170  404591 status.go:174] checking status of multinode-346797-m02 ...
	I1121 14:40:39.378479  404591 cli_runner.go:164] Run: docker container inspect multinode-346797-m02 --format={{.State.Status}}
	I1121 14:40:39.395227  404591 status.go:371] multinode-346797-m02 host status = "Stopped" (err=<nil>)
	I1121 14:40:39.395251  404591 status.go:384] host is not running, skipping remaining checks
	I1121 14:40:39.395257  404591 status.go:176] multinode-346797-m02 status: &{Name:multinode-346797-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)
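Editor's note: minikube status communicates state through its exit code as well as its text; here the exit status 7 accompanies the Stopped report and is expected by the test rather than being a failure. A small sketch of how a caller can recover both the code and the report with os/exec (profile name taken from the log above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-346797", "status")
        out, err := cmd.Output() // Output still returns captured stdout alongside an *ExitError
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // A non-zero code such as 7 still comes with the usual status report.
            fmt.Printf("status exited %d:\n%s", exitErr.ExitCode(), out)
            return
        }
        if err != nil {
            fmt.Println("could not run minikube:", err)
            return
        }
        fmt.Printf("all components running:\n%s", out)
    }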

TestMultiNode/serial/RestartMultiNode (48.68s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346797 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1121 14:41:12.227121  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346797 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.95528656s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346797 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.68s)

TestMultiNode/serial/ValidateNameConflict (36.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-346797
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346797-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-346797-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.08201ms)

-- stdout --
	* [multinode-346797-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-346797-m02' is duplicated with machine name 'multinode-346797-m02' in profile 'multinode-346797'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346797-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346797-m03 --driver=docker  --container-runtime=crio: (34.151108785s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-346797
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-346797: exit status 80 (365.072076ms)

-- stdout --
	* Adding node m03 to cluster multinode-346797 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-346797-m03 already exists in multinode-346797-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-346797-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-346797-m03: (2.108061069s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.77s)
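Editor's note: the MK_USAGE failure above is minikube's own guard at work: node machines in a multi-node profile are named <profile>-m02, -m03, and so on, so a new profile named multinode-346797-m02 would collide with an existing machine. A sketch of that uniqueness check, assuming a simple map of profiles to their machine names rather than minikube's real profile store:

    package main

    import "fmt"

    // validateProfileName rejects a proposed profile name that matches any
    // existing profile or any machine inside one; machines follow the
    // <profile>-mNN convention seen in the log.
    func validateProfileName(name string, profiles map[string][]string) error {
        for profile, machines := range profiles {
            if name == profile {
                return fmt.Errorf("profile name %q already exists", name)
            }
            for _, m := range machines {
                if name == m {
                    return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
                        name, m, profile)
                }
            }
        }
        return nil
    }

    func main() {
        profiles := map[string][]string{
            "multinode-346797": {"multinode-346797", "multinode-346797-m02"},
        }
        fmt.Println(validateProfileName("multinode-346797-m02", profiles)) // rejected, as above
        fmt.Println(validateProfileName("multinode-346797-m03", profiles)) // nil: allowed
    }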

TestPreload (184.1s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-618325 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-618325 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m25.651874362s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-618325 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-618325 image pull gcr.io/k8s-minikube/busybox: (2.185806814s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-618325
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-618325: (5.915143082s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-618325 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1121 14:44:05.600214  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-618325 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m27.625844586s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-618325 image list
helpers_test.go:175: Cleaning up "test-preload-618325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-618325
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-618325: (2.485226979s)
--- PASS: TestPreload (184.10s)

TestScheduledStopUnix (110.75s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-983326 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-983326 --memory=3072 --driver=docker  --container-runtime=crio: (33.872691105s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-983326 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1121 14:45:47.238177  418572 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:45:47.238369  418572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:45:47.238403  418572 out.go:374] Setting ErrFile to fd 2...
	I1121 14:45:47.238424  418572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:45:47.239025  418572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:45:47.239384  418572 out.go:368] Setting JSON to false
	I1121 14:45:47.239528  418572 mustload.go:66] Loading cluster: scheduled-stop-983326
	I1121 14:45:47.240081  418572 config.go:182] Loaded profile config "scheduled-stop-983326": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:45:47.240173  418572 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/config.json ...
	I1121 14:45:47.240369  418572 mustload.go:66] Loading cluster: scheduled-stop-983326
	I1121 14:45:47.240540  418572 config.go:182] Loaded profile config "scheduled-stop-983326": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-983326 -n scheduled-stop-983326
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-983326 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1121 14:45:47.677126  418662 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:45:47.677352  418662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:45:47.677381  418662 out.go:374] Setting ErrFile to fd 2...
	I1121 14:45:47.677403  418662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:45:47.677686  418662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:45:47.677997  418662 out.go:368] Setting JSON to false
	I1121 14:45:47.679026  418662 daemonize_unix.go:73] killing process 418588 as it is an old scheduled stop
	I1121 14:45:47.679244  418662 mustload.go:66] Loading cluster: scheduled-stop-983326
	I1121 14:45:47.679663  418662 config.go:182] Loaded profile config "scheduled-stop-983326": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:45:47.679763  418662 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/config.json ...
	I1121 14:45:47.679968  418662 mustload.go:66] Loading cluster: scheduled-stop-983326
	I1121 14:45:47.680105  418662 config.go:182] Loaded profile config "scheduled-stop-983326": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1121 14:45:47.689589  291060 retry.go:31] will retry after 127.703µs: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.690769  291060 retry.go:31] will retry after 173.651µs: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.691897  291060 retry.go:31] will retry after 285.048µs: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.693029  291060 retry.go:31] will retry after 183.546µs: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.694153  291060 retry.go:31] will retry after 376.987µs: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.695274  291060 retry.go:31] will retry after 1.048111ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.696434  291060 retry.go:31] will retry after 1.643625ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.698632  291060 retry.go:31] will retry after 1.829785ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.700813  291060 retry.go:31] will retry after 2.293682ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.704017  291060 retry.go:31] will retry after 3.407141ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.708229  291060 retry.go:31] will retry after 8.172777ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.717470  291060 retry.go:31] will retry after 11.364985ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.729704  291060 retry.go:31] will retry after 12.008747ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.741859  291060 retry.go:31] will retry after 20.044684ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.762023  291060 retry.go:31] will retry after 28.997027ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
I1121 14:45:47.791266  291060 retry.go:31] will retry after 31.526998ms: open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/pid: no such file or directory
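The retry.go:31 lines above are minikube's poll-with-backoff loop: the delay roughly doubles from ~128µs into the tens of milliseconds, with jitter, until the scheduled-stop pid file appears. A minimal Go sketch of that pattern (retryWithBackoff and the pid-file path are illustrative assumptions, not minikube's actual retry package):

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryWithBackoff polls fn with roughly doubling, jittered delays until it
// succeeds or the overall deadline passes -- the same shape as the
// "will retry after ..." lines logged above.
func retryWithBackoff(fn func() error, initial, maxElapsed time.Duration) error {
	deadline := time.Now().Add(maxElapsed)
	for delay := initial; ; delay *= 2 {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up retrying: %w", err)
		}
		// Add up to 100% jitter so concurrent pollers do not sync up.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)+1))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
}

func main() {
	pidFile := "scheduled-stop.pid" // hypothetical path for the sketch
	_ = retryWithBackoff(func() error {
		_, err := os.Stat(pidFile)
		return err
	}, 128*time.Microsecond, 5*time.Second)
}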
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-983326 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1121 14:46:12.230273  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-983326 -n scheduled-stop-983326
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-983326
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-983326 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1121 14:46:13.628968  419025 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:46:13.629264  419025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:46:13.629293  419025 out.go:374] Setting ErrFile to fd 2...
	I1121 14:46:13.629316  419025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:46:13.629754  419025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:46:13.630141  419025 out.go:368] Setting JSON to false
	I1121 14:46:13.630276  419025 mustload.go:66] Loading cluster: scheduled-stop-983326
	I1121 14:46:13.630709  419025 config.go:182] Loaded profile config "scheduled-stop-983326": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:46:13.630831  419025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/scheduled-stop-983326/config.json ...
	I1121 14:46:13.631111  419025 mustload.go:66] Loading cluster: scheduled-stop-983326
	I1121 14:46:13.631283  419025 config.go:182] Loaded profile config "scheduled-stop-983326": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-983326
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-983326: exit status 7 (80.233508ms)

-- stdout --
	scheduled-stop-983326
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-983326 -n scheduled-stop-983326
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-983326 -n scheduled-stop-983326: exit status 7 (69.44537ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-983326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-983326
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-983326: (5.272099498s)
--- PASS: TestScheduledStopUnix (110.75s)

TestInsufficientStorage (10.78s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-035781 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-035781 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.213903678s)

-- stdout --
	{"specversion":"1.0","id":"93509fa8-de9a-4c14-8f79-5c39d0e66324","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-035781] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5aa193f6-d899-42c1-a7db-2b401bbbd695","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21847"}}
	{"specversion":"1.0","id":"58d79a81-dc4a-4341-9f40-b3996e6217cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f3421e6-5cbf-4511-aed9-c0a261d3b1c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig"}}
	{"specversion":"1.0","id":"9ade6b1f-171c-4758-bbbc-45a0a574e9b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube"}}
	{"specversion":"1.0","id":"40e3abc8-0081-4031-9d23-ab8e0d579577","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"478d9668-dc11-4aca-8fc9-20a3263d8e7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a9f2da70-8594-460b-a070-46b62d56d7ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7d3a2353-044f-4068-99fe-746654b5f652","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0078d177-8ebf-47de-afa5-2082812e1c25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ae2b8fa-4323-40c3-b9ad-8237fc341ee5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3a76fa08-4b9c-48be-8db4-214210cd5e00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-035781\" primary control-plane node in \"insufficient-storage-035781\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbf2f1f9-cc51-4fa3-b9b9-df361a0abbb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0ff3cc8-8ff1-4d48-a9f7-abc53eb073d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8b8ed8e-87cf-4c7c-8b9a-62bbdf0be6fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
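Each line of the `--output=json` stream above is a CloudEvents-style JSON object carrying specversion, type, and a data map of strings. A minimal sketch for consuming such a stream (struct and field names are inferred from the output above, not minikube's own types):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// cloudEvent mirrors the fields visible in the JSON lines above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe minikube's JSON output here
	sc.Buffer(make([]byte, 64*1024), 1024*1024)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue // skip non-JSON lines
		}
		// "io.k8s.sigs.minikube.error" events carry exitcode/advice fields,
		// as in the RSRC_DOCKER_STORAGE event above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}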
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-035781 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-035781 --output=json --layout=cluster: exit status 7 (301.76446ms)

-- stdout --
	{"Name":"insufficient-storage-035781","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-035781","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1121 14:47:12.564134  420728 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-035781" does not appear in /home/jenkins/minikube-integration/21847-289204/kubeconfig

** /stderr **
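The `--layout=cluster` payload nests component statuses under the cluster and each node, with HTTP-like codes (507 Insufficient Storage, 405 Stopped, 500 Error). A compact sketch decoding just the fields the test asserts (type names are mine):

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterState struct {
	Name         string               `json:"Name"`
	StatusCode   int                  `json:"StatusCode"`
	StatusName   string               `json:"StatusName"`
	StatusDetail string               `json:"StatusDetail"`
	Components   map[string]component `json:"Components"`
	Nodes        []node               `json:"Nodes"`
}

func main() {
	raw := `{"Name":"insufficient-storage-035781","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Nodes":[{"Name":"insufficient-storage-035781","StatusCode":507,"StatusName":"InsufficientStorage"}]}`
	var st clusterState
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusCode, st.StatusName, st.StatusDetail)
}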
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-035781 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-035781 --output=json --layout=cluster: exit status 7 (310.153416ms)

-- stdout --
	{"Name":"insufficient-storage-035781","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-035781","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1121 14:47:12.873586  420793 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-035781" does not appear in /home/jenkins/minikube-integration/21847-289204/kubeconfig
	E1121 14:47:12.883644  420793 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/insufficient-storage-035781/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-035781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-035781
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-035781: (1.954590683s)
--- PASS: TestInsufficientStorage (10.78s)

TestRunningBinaryUpgrade (69.54s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1129323758 start -p running-upgrade-913045 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1129323758 start -p running-upgrade-913045 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.082764972s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-913045 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1121 14:51:12.226679  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-913045 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.578426615s)
helpers_test.go:175: Cleaning up "running-upgrade-913045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-913045
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-913045: (2.052680774s)
--- PASS: TestRunningBinaryUpgrade (69.54s)

TestKubernetesUpgrade (357.02s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1121 14:49:05.599653  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.469348476s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-886613
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-886613: (1.608320849s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-886613 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-886613 status --format={{.Host}}: exit status 7 (135.812846ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.423924606s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-886613 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (132.730085ms)

-- stdout --
	* [kubernetes-upgrade-886613] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-886613
	    minikube start -p kubernetes-upgrade-886613 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8866132 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-886613 --kubernetes-version=v1.34.1
	    

** /stderr **
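The K8S_DOWNGRADE_UNSUPPORTED refusal above comes from comparing the requested version against the cluster's existing one before anything is mutated. A minimal sketch of such a guard using golang.org/x/mod/semver (the function name and wiring are assumptions, not minikube's code):

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// validateVersionChange refuses to move an existing cluster to an older
// Kubernetes version, mirroring the exit-status-106 refusal above.
func validateVersionChange(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	fmt.Println(validateVersionChange("v1.34.1", "v1.28.0")) // refused
	fmt.Println(validateVersionChange("v1.28.0", "v1.34.1")) // nil: upgrade is fine
}

Recreating the profile or starting a second one, as the suggestion text lists, are the only safe paths once the check fires.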
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1121 14:54:05.600159  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:54:15.293104  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-886613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.663024278s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-886613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-886613
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-886613: (2.447269286s)
--- PASS: TestKubernetesUpgrade (357.02s)

TestMissingContainerUpgrade (121.42s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1379853181 start -p missing-upgrade-036945 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1379853181 start -p missing-upgrade-036945 --memory=3072 --driver=docker  --container-runtime=crio: (1m8.790868974s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-036945
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-036945
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-036945 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-036945 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.419419492s)
helpers_test.go:175: Cleaning up "missing-upgrade-036945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-036945
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-036945: (2.199925426s)
--- PASS: TestMissingContainerUpgrade (121.42s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-140266 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-140266 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (109.482534ms)

-- stdout --
	* [NoKubernetes-140266] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
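Exit status 14 is minikube's usage-error code: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive. A simplified sketch of that flag validation with the standard flag package (minikube's real CLI is built on cobra; this is only the shape of the check):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Reject the conflicting combination up front, as the run above does.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // matches the exit status 14 seen above
	}
	fmt.Println("flags OK")
}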

TestNoKubernetes/serial/StartWithK8s (46.15s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-140266 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-140266 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.484155855s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-140266 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.15s)

TestNoKubernetes/serial/StartWithStopK8s (8.02s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-140266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-140266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.437828965s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-140266 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-140266 status -o json: exit status 2 (393.182936ms)

-- stdout --
	{"Name":"NoKubernetes-140266","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-140266
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-140266: (2.183742888s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.02s)

TestNoKubernetes/serial/Start (10.02s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-140266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-140266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.018425762s)
--- PASS: TestNoKubernetes/serial/Start (10.02s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
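The check above asserts that a --no-kubernetes start leaves no Kubernetes binaries in the version cache (v0.0.0 is the placeholder version). A sketch of the idea, with the assertion logic assumed rather than copied from no_kubernetes_test.go; the cache path is the one printed above:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	cache := "/home/jenkins/minikube-integration/21847-289204/.minikube/cache/linux/arm64/v0.0.0"
	// None of the usual Kubernetes binaries should have been downloaded.
	for _, bin := range []string{"kubelet", "kubeadm", "kubectl"} {
		if _, err := os.Stat(filepath.Join(cache, bin)); err == nil {
			fmt.Printf("unexpected download: %s\n", bin)
			os.Exit(1)
		}
	}
	fmt.Println("no Kubernetes binaries downloaded")
}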

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-140266 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-140266 "sudo systemctl is-active --quiet service kubelet": exit status 1 (334.675325ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
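The "ssh: Process exited with status 3" above is the expected result: systemctl is-active exits 0 only when the unit is active and non-zero (typically 3) when it is inactive, so the test passes precisely because the kubelet check fails. A sketch of that exit-code interpretation (the helper name is mine):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// unitActive reports whether a systemd unit is active. `systemctl is-active
// --quiet` exits 0 when active and non-zero when inactive, which is exactly
// what the kubelet check above relies on.
func unitActive(unit string) (bool, error) {
	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // systemctl ran, but the unit is not active
	}
	return false, err // systemctl itself could not run
}

func main() {
	active, err := unitActive("kubelet")
	fmt.Println("kubelet active:", active, "err:", err)
}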

TestNoKubernetes/serial/ProfileList (1.21s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.21s)

TestNoKubernetes/serial/Stop (1.37s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-140266
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-140266: (1.366885852s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

TestNoKubernetes/serial/StartNoArgs (7.73s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-140266 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-140266 --driver=docker  --container-runtime=crio: (7.734621162s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.73s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-140266 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-140266 "sudo systemctl is-active --quiet service kubelet": exit status 1 (325.279947ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/Setup (2.16s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.16s)

TestStoppedBinaryUpgrade/Upgrade (62.59s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.561043615 start -p stopped-upgrade-489557 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.561043615 start -p stopped-upgrade-489557 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.705128487s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.561043615 -p stopped-upgrade-489557 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.561043615 -p stopped-upgrade-489557 stop: (1.53657646s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-489557 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-489557 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.348905032s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.59s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-489557
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-489557: (1.246948537s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

TestPause/serial/Start (82.19s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-706190 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-706190 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.191638601s)
--- PASS: TestPause/serial/Start (82.19s)

TestPause/serial/SecondStartNoReconfiguration (31.91s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-706190 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-706190 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.885269528s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.91s)

TestNetworkPlugins/group/false (5.26s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-609503 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-609503 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (313.736767ms)

-- stdout --
	* [false-609503] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1121 14:54:23.593224  459089 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:54:23.593465  459089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:54:23.593496  459089 out.go:374] Setting ErrFile to fd 2...
	I1121 14:54:23.593518  459089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:54:23.593832  459089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-289204/.minikube/bin
	I1121 14:54:23.594340  459089 out.go:368] Setting JSON to false
	I1121 14:54:23.595370  459089 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9415,"bootTime":1763727448,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1121 14:54:23.595484  459089 start.go:143] virtualization:  
	I1121 14:54:23.603026  459089 out.go:179] * [false-609503] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:54:23.606102  459089 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:54:23.606174  459089 notify.go:221] Checking for updates...
	I1121 14:54:23.612585  459089 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:54:23.615493  459089 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-289204/kubeconfig
	I1121 14:54:23.618465  459089 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-289204/.minikube
	I1121 14:54:23.621530  459089 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:54:23.624601  459089 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:54:23.628117  459089 config.go:182] Loaded profile config "kubernetes-upgrade-886613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:54:23.628263  459089 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:54:23.686478  459089 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:54:23.686646  459089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:54:23.822943  459089 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:54:23.803691445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:54:23.823066  459089 docker.go:319] overlay module found
	I1121 14:54:23.826253  459089 out.go:179] * Using the docker driver based on user configuration
	I1121 14:54:23.829188  459089 start.go:309] selected driver: docker
	I1121 14:54:23.829220  459089 start.go:930] validating driver "docker" against <nil>
	I1121 14:54:23.829250  459089 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:54:23.832988  459089 out.go:203] 
	W1121 14:54:23.835784  459089 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1121 14:54:23.838691  459089 out.go:203] 

** /stderr **
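The MK_USAGE failure is deliberate: CRI-O has no built-in network plumbing, so minikube rejects --cni=false for it up front rather than producing a broken cluster. A sketch of such a validation (names assumed; not minikube's actual validator):

package main

import (
	"fmt"
	"os"
)

// validateCNI rejects runtime/CNI combinations that cannot work: the crio
// and containerd runtimes both require a CNI plugin.
func validateCNI(runtime, cni string) error {
	if cni == "false" && (runtime == "crio" || runtime == "containerd") {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // matches the exit status 14 seen above
	}
}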
net_test.go:88: 
----------------------- debugLogs start: false-609503 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-609503

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-609503

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-609503

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-609503

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-609503

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-609503

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-609503

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-609503

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-609503

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-609503

>>> host: /etc/nsswitch.conf:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: /etc/hosts:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: /etc/resolv.conf:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-609503

>>> host: crictl pods:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: crictl containers:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> k8s: describe netcat deployment:
error: context "false-609503" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-609503" does not exist

>>> k8s: netcat logs:
error: context "false-609503" does not exist

>>> k8s: describe coredns deployment:
error: context "false-609503" does not exist

>>> k8s: describe coredns pods:
error: context "false-609503" does not exist

>>> k8s: coredns logs:
error: context "false-609503" does not exist

>>> k8s: describe api server pod(s):
error: context "false-609503" does not exist

>>> k8s: api server logs:
error: context "false-609503" does not exist

>>> host: /etc/cni:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: ip a s:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: ip r s:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: iptables-save:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: iptables table nat:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> k8s: describe kube-proxy daemon set:
error: context "false-609503" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-609503" does not exist

>>> k8s: kube-proxy logs:
error: context "false-609503" does not exist

>>> host: kubelet daemon status:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: kubelet daemon config:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> k8s: kubelet logs:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:53:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-886613
contexts:
- context:
    cluster: kubernetes-upgrade-886613
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:53:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-886613
  name: kubernetes-upgrade-886613
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-886613
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/kubernetes-upgrade-886613/client.crt
    client-key: /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/kubernetes-upgrade-886613/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-609503

>>> host: docker daemon status:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: docker daemon config:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: /etc/docker/daemon.json:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: docker system info:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: cri-docker daemon status:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: cri-docker daemon config:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: cri-dockerd version:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: containerd daemon status:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: containerd daemon config:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609503"

                                                
                                                
----------------------- debugLogs end: false-609503 [took: 4.759243811s] --------------------------------
helpers_test.go:175: Cleaning up "false-609503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-609503
--- PASS: TestNetworkPlugins/group/false (5.26s)
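Note that the debugLogs dump above consists entirely of "Profile not found" responses: no cluster was ever started under the false-609503 profile (the captured kubeconfig only knows kubernetes-upgrade-886613, and the cms query fails for the same reason). A minimal guard, sketched here as a hypothetical wrapper rather than anything the harness is known to do, would skip collection when the profile does not exist:

    # Sketch (hypothetical): only collect debug logs if the profile still exists.
    if out/minikube-linux-arm64 profile list --output json | grep -q '"false-609503"'; then
      out/minikube-linux-arm64 -p false-609503 logs
    else
      echo "profile false-609503 does not exist; skipping debug-log collection"
    fi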

TestStartStop/group/old-k8s-version/serial/FirstStart (62.84s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1121 14:56:12.226219  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.840509674s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.84s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-357479 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fc05db92-ca5b-43e5-a59d-474356b5cfa5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fc05db92-ca5b-43e5-a59d-474356b5cfa5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003359953s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-357479 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

TestStartStop/group/old-k8s-version/serial/Stop (12.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-357479 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-357479 --alsologtostderr -v=3: (12.01210051s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357479 -n old-k8s-version-357479
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357479 -n old-k8s-version-357479: exit status 7 (79.180375ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-357479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
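The non-zero exit above is not a failure: with the cluster stopped, minikube status prints "Stopped" and returns a non-zero code, which the harness explicitly tolerates ("may be ok"). Re-running the same check by hand shows the behavior; the trailing echo is illustrative only:

    # A stopped profile prints "Stopped" and exits non-zero (7 in this run).
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357479
    echo "exit code: $?"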

TestStartStop/group/old-k8s-version/serial/SecondStart (51.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-357479 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.374568006s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-357479 -n old-k8s-version-357479
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.77s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-87tjm" [cc172b90-47f0-4d9f-a696-97f474da198a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003344392s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-87tjm" [cc172b90-47f0-4d9f-a696-97f474da198a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004333899s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-357479 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.25s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.53s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-357479 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.53s)

TestStartStop/group/embed-certs/serial/FirstStart (88.11s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m28.109597013s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.11s)

TestStartStop/group/no-preload/serial/FirstStart (74.13s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1121 14:59:05.599695  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m14.134239316s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.13s)

TestStartStop/group/no-preload/serial/DeployApp (9.83s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-844780 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e4d9947f-b15c-4e85-a63a-57d09cacf149] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e4d9947f-b15c-4e85-a63a-57d09cacf149] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.015045682s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-844780 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.83s)

TestStartStop/group/no-preload/serial/Stop (12.15s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-844780 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-844780 --alsologtostderr -v=3: (12.151459567s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.15s)

TestStartStop/group/embed-certs/serial/DeployApp (9.41s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-902161 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9a929447-7041-4d25-a008-51ccf9c7f5e2] Pending
helpers_test.go:352: "busybox" [9a929447-7041-4d25-a008-51ccf9c7f5e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9a929447-7041-4d25-a008-51ccf9c7f5e2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003428568s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-902161 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-844780 -n no-preload-844780
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-844780 -n no-preload-844780: exit status 7 (101.881427ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-844780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/embed-certs/serial/Stop (12.2s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-902161 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-902161 --alsologtostderr -v=3: (12.201132697s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.20s)

TestStartStop/group/no-preload/serial/SecondStart (55.55s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-844780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.18073541s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-844780 -n no-preload-844780
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.55s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-902161 -n embed-certs-902161
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-902161 -n embed-certs-902161: exit status 7 (138.701888ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-902161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (59.9s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1121 15:01:12.226378  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-902161 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (59.370946796s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-902161 -n embed-certs-902161
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.90s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6gjq5" [9bd7c0e3-0f0c-48da-b343-e3b558c82dcc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00373647s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6gjq5" [9bd7c0e3-0f0c-48da-b343-e3b558c82dcc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003888895s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-844780 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-844780 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rlwns" [81841e6d-5253-428b-8f5f-98af5f095bfc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003725572s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.15s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.153685538s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.15s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rlwns" [81841e6d-5253-428b-8f5f-98af5f095bfc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003861617s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-902161 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-902161 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/FirstStart (44.55s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1121 15:01:57.769796  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:01:57.776165  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:01:57.787535  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:01:57.808973  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:01:57.850333  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:01:57.932566  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:01:58.094017  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:01:58.415520  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:01:59.057264  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:02:00.346303  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:02:02.908511  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:02:08.030206  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:02:18.271557  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.545219528s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.55s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.37s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-714993 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-714993 --alsologtostderr -v=3: (1.372011607s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-714993 -n newest-cni-714993
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-714993 -n newest-cni-714993: exit status 7 (84.395878ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-714993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (16.32s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-714993 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.909437831s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-714993 -n newest-cni-714993
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.32s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
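The warning above is expected: the cluster was started with --network-plugin=cni and no CNI was deployed, so workload pods cannot schedule and the user-app check is skipped. The "additional setup" it refers to would look roughly like the following, where the manifest path is a placeholder rather than anything this suite actually uses:

    # Sketch (hypothetical manifest): deploy a CNI so pods can schedule in cni mode.
    kubectl --context newest-cni-714993 apply -f <your-cni-manifest.yaml>
    kubectl --context newest-cni-714993 -n kube-system get pods --watch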

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-714993 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-124330 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [878764d7-809e-440c-a237-6313950ee921] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [878764d7-809e-440c-a237-6313950ee921] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003593805s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-124330 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

TestNetworkPlugins/group/auto/Start (88.9s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m28.902771561s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.90s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.16s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-124330 --alsologtostderr -v=3
E1121 15:03:19.714788  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-124330 --alsologtostderr -v=3: (12.161678061s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.16s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330: exit status 7 (121.302911ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-124330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.66s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1121 15:04:05.599665  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/addons-494116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-124330 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m3.300629629s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-124330 -n default-k8s-diff-port-124330
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.66s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8j9t6" [e8eeec4c-0209-4d3a-bf07-5706e2abe27e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003620307s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8j9t6" [e8eeec4c-0209-4d3a-bf07-5706e2abe27e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003874492s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-124330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-609503 "pgrep -a kubelet"
I1121 15:04:37.253993  291060 config.go:182] Loaded profile config "auto-609503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (11.3s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-609503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4mcgd" [920403f9-b4ff-4c08-9851-419df87dfd1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4mcgd" [920403f9-b4ff-4c08-9851-419df87dfd1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003330736s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-124330 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestNetworkPlugins/group/auto/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-609503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/Start (87.98s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1121 15:04:52.368615  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:04:52.374966  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:04:52.386326  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:04:52.407689  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:04:52.449057  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:04:52.530417  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:04:52.691863  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:04:53.013506  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:04:53.655547  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:04:54.938792  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:04:57.502857  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:05:02.624901  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m27.978625508s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.98s)

TestNetworkPlugins/group/calico/Start (67.12s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1121 15:05:33.349417  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:06:12.226716  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/functional-939098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:06:14.311571  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m7.124297142s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-v5cwx" [51321374-bd56-4b1b-b10f-1f723064f0d1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003266842s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
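The ControllerPod checks block until a pod matching a label selector is Running. A minimal client-go sketch of that polling loop, assuming a default kubeconfig and illustrative intervals (minikube's real helper lives in helpers_test.go):

// Sketch only: poll for a Running pod matching "app=kindnet" in kube-system.
package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // 10m0s matches the wait budget logged by net_test.go:120.
    err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 10*time.Minute, true,
        func(ctx context.Context) (bool, error) {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
                metav1.ListOptions{LabelSelector: "app=kindnet"})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, p := range pods.Items {
                if p.Status.Phase == "Running" {
                    return true, nil
                }
            }
            return false, nil
        })
    fmt.Println("wait finished, err =", err)
}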

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-rfwtl" [951364f7-6915-4af5-a956-3f693b5780ff] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-rfwtl" [951364f7-6915-4af5-a956-3f693b5780ff] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00438022s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-609503 "pgrep -a kubelet"
I1121 15:06:24.252180  291060 config.go:182] Loaded profile config "kindnet-609503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)
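KubeletFlags inspects the live kubelet command line over `minikube ssh`. A sketch of that round trip; the specific flag asserted here is an illustrative assumption, not the test's real check:

// Sketch only: fetch the running kubelet command line and grep it.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    out, err := exec.Command("out/minikube-linux-arm64",
        "ssh", "-p", "kindnet-609503", "pgrep -a kubelet").CombinedOutput()
    if err != nil {
        fmt.Printf("ssh failed: %v\n%s", err, out)
        return
    }
    if !strings.Contains(string(out), "--container-runtime-endpoint") {
        fmt.Println("expected kubelet flag not found in:", string(out))
        return
    }
    fmt.Println("kubelet command line looks sane")
}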

TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-609503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mbgf6" [c470747e-af46-4a02-bd57-e86e70b088c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mbgf6" [c470747e-af46-4a02-bd57-e86e70b088c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.002978251s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)
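NetCatPod force-recreates a small netcat deployment and waits for it to come up. A sketch of the same two steps; note the real test polls pods by label, whereas this sketch leans on `kubectl wait` for brevity:

// Sketch only: re-create the netcat deployment and block until available.
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    steps := [][]string{
        {"kubectl", "--context", "kindnet-609503", "replace", "--force",
            "-f", "testdata/netcat-deployment.yaml"},
        // 15m mirrors the wait budget logged by net_test.go:163.
        {"kubectl", "--context", "kindnet-609503", "wait",
            "deployment/netcat", "--for=condition=Available", "--timeout=15m"},
    }
    for _, s := range steps {
        out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
        fmt.Printf("$ %v\n%s", s, out)
        if err != nil {
            fmt.Println("step failed:", err)
            return
        }
    }
}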

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-609503 "pgrep -a kubelet"
I1121 15:06:27.136480  291060 config.go:182] Loaded profile config "calico-609503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-609503 replace --force -f testdata/netcat-deployment.yaml
I1121 15:06:27.523937  291060 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dm9h8" [392dcc23-7a5b-40f7-b6ef-2eada93b5037] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dm9h8" [392dcc23-7a5b-40f7-b6ef-2eada93b5037] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005680626s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.40s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-609503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)
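The DNS probe is just an in-pod nslookup of the cluster service domain; a non-zero exit means in-cluster DNS is broken. A minimal sketch of the same invocation:

// Sketch only: run the DNS probe shown in the log above.
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    out, err := exec.Command("kubectl", "--context", "kindnet-609503",
        "exec", "deployment/netcat", "--",
        "nslookup", "kubernetes.default").CombinedOutput()
    if err != nil {
        fmt.Printf("in-cluster DNS failed: %v\n%s", err, out)
        return
    }
    fmt.Printf("resolved:\n%s", out)
}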

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)
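HairPin asks a pod to reach its own service name ("netcat" resolves back to the pod through the service VIP), which only works when the CNI/kube-proxy hairpin path is functional; Localhost is the same nc invocation pointed at 127.0.0.1. A sketch covering both probes:

// Sketch only: the Localhost/HairPin probes from the log above.
package main

import (
    "fmt"
    "os/exec"
)

func probe(target string) error {
    // -w 5: connect timeout; -z: scan without sending data (mirrors the log).
    out, err := exec.Command("kubectl", "--context", "kindnet-609503",
        "exec", "deployment/netcat", "--",
        "/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)).CombinedOutput()
    if err != nil {
        return fmt.Errorf("nc %s: %v\n%s", target, err, out)
    }
    return nil
}

func main() {
    for _, t := range []string{"localhost", "netcat"} {
        if err := probe(t); err != nil {
            fmt.Println(err)
            continue
        }
        fmt.Println(t, "reachable on 8080")
    }
}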

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-609503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/Start (65.96s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.961034239s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.96s)
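Unlike the named plugins, this group passes a manifest path to --cni (testdata/kube-flannel.yaml above). An illustrative way to tell the two forms apart, assuming this split; it is not minikube's actual resolution code:

// Sketch only: --cni accepts a built-in plugin name or a manifest path.
package main

import (
    "fmt"
    "os"
)

func resolveCNI(value string) string {
    switch value {
    case "auto", "kindnet", "calico", "flannel", "bridge":
        return "builtin:" + value
    }
    if _, err := os.Stat(value); err == nil {
        return "manifest:" + value
    }
    return "unknown:" + value
}

func main() {
    for _, v := range []string{"kindnet", "testdata/kube-flannel.yaml"} {
        fmt.Println(v, "=>", resolveCNI(v))
    }
}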

TestNetworkPlugins/group/enable-default-cni/Start (83.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1121 15:07:25.479120  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/old-k8s-version-357479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:07:36.232979  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:07:59.840519  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:07:59.847011  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:07:59.858459  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:07:59.879829  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:07:59.921932  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:08:00.012878  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:08:00.174389  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:08:00.498271  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:08:01.140443  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:08:02.422055  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:08:04.984531  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m23.814871034s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.81s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-609503 "pgrep -a kubelet"
I1121 15:08:07.864653  291060 config.go:182] Loaded profile config "custom-flannel-609503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-609503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-httrd" [5edc1932-1432-4165-84ed-f4a4e8e26bb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1121 15:08:10.106819  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-httrd" [5edc1932-1432-4165-84ed-f4a4e8e26bb8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003974093s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-609503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-609503 "pgrep -a kubelet"
I1121 15:08:30.672624  291060 config.go:182] Loaded profile config "enable-default-cni-609503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-609503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4r7sz" [4d137f9e-8cdb-4b72-bb42-59054b233e2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4r7sz" [4d137f9e-8cdb-4b72-bb42-59054b233e2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00423421s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

TestNetworkPlugins/group/flannel/Start (67.94s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1121 15:08:40.830077  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.940745396s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.94s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-609503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (75.23s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1121 15:09:21.792200  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/default-k8s-diff-port-124330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:37.523878  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:37.531840  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:37.543125  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:37.564496  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:37.605784  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:37.687164  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:37.848733  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:38.170672  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:38.812982  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:40.094362  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:42.656714  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 15:09:47.779067  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-609503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m15.233864092s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.23s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-chtzs" [4d933809-ab07-4466-8168-ac023f95d77c] Running
E1121 15:09:52.368527  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/no-preload-844780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003432062s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-609503 "pgrep -a kubelet"
I1121 15:09:54.657016  291060 config.go:182] Loaded profile config "flannel-609503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-609503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jkhm4" [13b4d9a9-cf68-49b9-9316-d9db9bec90f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1121 15:09:58.020604  291060 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/auto-609503/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-jkhm4" [13b4d9a9-cf68-49b9-9316-d9db9bec90f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003386571s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-609503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-609503 "pgrep -a kubelet"
I1121 15:10:21.836041  291060 config.go:182] Loaded profile config "bridge-609503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

TestNetworkPlugins/group/bridge/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-609503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2mq2z" [b69b53d8-0525-4f9d-a571-60d65aecdafc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2mq2z" [b69b53d8-0525-4f9d-a571-60d65aecdafc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.007818464s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.41s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-609503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-609503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (31/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-223827 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-223827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-223827
--- SKIP: TestDownloadOnlyKic (0.45s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
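Most SKIP lines in this section come from environment guards at the top of each test. A sketch of the shape of those guards; the helper names are illustrative, while the skip messages and the issue link are the ones actually logged above:

// Sketch only: environment guards behind the SKIP lines in this section.
package sketch

import (
    "runtime"
    "testing"
)

// skipOnArm64NonDocker mirrors guards like aab_offline_test.go:35.
func skipOnArm64NonDocker(t *testing.T, containerRuntime string) {
    t.Helper()
    if runtime.GOARCH == "arm64" && containerRuntime != "docker" {
        t.Skipf("skipping: only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144")
    }
}

// skipIfNotDarwin mirrors guards like driver_install_or_update_test.go:37.
func skipIfNotDarwin(t *testing.T) {
    t.Helper()
    if runtime.GOOS != "darwin" {
        t.Skip("Skip if not darwin.")
    }
}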

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-984933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-984933
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
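The "Cleaning up ... profile" lines above appear even for skipped tests because a profile name is reserved up front and deletion is registered before the skip guard fires. A sketch of that pattern with an illustrative helper name:

// Sketch only: register profile deletion before any skip can trigger.
package sketch

import (
    "os/exec"
    "testing"
)

func withProfile(t *testing.T, name string) {
    t.Helper()
    t.Cleanup(func() {
        // Mirrors: out/minikube-linux-arm64 delete -p <profile>
        out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", name).CombinedOutput()
        if err != nil {
            t.Logf("cleanup of %s failed: %v\n%s", name, err, out)
        }
    })
}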

TestNetworkPlugins/group/kubenet (4.84s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-609503 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-609503

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-609503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-609503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-609503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-609503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-609503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-609503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-609503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-609503" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-609503" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-609503" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-609503" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: kubelet daemon config:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> k8s: kubelet logs:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-289204/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:53:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-886613
contexts:
- context:
    cluster: kubernetes-upgrade-886613
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:53:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-886613
  name: kubernetes-upgrade-886613
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-886613
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/kubernetes-upgrade-886613/client.crt
    client-key: /home/jenkins/minikube-integration/21847-289204/.minikube/profiles/kubernetes-upgrade-886613/client.key
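Note: the kubeconfig above contains only the kubernetes-upgrade-886613 entry and an empty current-context, which is why every kubectl probe in this section fails with 'context "kubenet-609503" does not exist'. For comparison, pointing kubectl at a context that does exist in this file would look like the following sketch (the context name is taken from the dump above, not from the kubenet test):

    # List the contexts actually present in the kubeconfig
    kubectl config get-contexts
    # Select one of them as the current context
    kubectl config use-context kubernetes-upgrade-886613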
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-609503
>>> host: docker daemon status:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: docker daemon config:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: docker system info:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: cri-docker daemon status:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: cri-docker daemon config:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: cri-dockerd version:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: containerd daemon status:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: containerd daemon config:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: containerd config dump:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: crio daemon status:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: crio daemon config:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: /etc/crio:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
>>> host: crio config:
* Profile "kubenet-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609503"
----------------------- debugLogs end: kubenet-609503 [took: 4.563178934s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-609503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-609503
--- SKIP: TestNetworkPlugins/group/kubenet (4.84s)
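Note: all of the ">>> host:" probes above fail before reaching the node because the kubenet-609503 profile was never created. Against a live profile, the same information can be gathered by hand with commands along these lines (a sketch; <profile> is a placeholder for an existing profile name):

    # Inspect CNI config, addresses, routes and NAT rules inside the minikube node
    minikube -p <profile> ssh -- sudo ls -la /etc/cni
    minikube -p <profile> ssh -- ip a s
    minikube -p <profile> ssh -- sudo iptables-save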

TestNetworkPlugins/group/cilium (6s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-609503 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-609503
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-609503
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-609503
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-609503
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-609503
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-609503
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-609503
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-609503
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-609503
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-609503
>>> host: /etc/nsswitch.conf:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: /etc/hosts:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: /etc/resolv.conf:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-609503
>>> host: crictl pods:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: crictl containers:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> k8s: describe netcat deployment:
error: context "cilium-609503" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-609503" does not exist
>>> k8s: netcat logs:
error: context "cilium-609503" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-609503" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-609503" does not exist
>>> k8s: coredns logs:
error: context "cilium-609503" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-609503" does not exist
>>> k8s: api server logs:
error: context "cilium-609503" does not exist
>>> host: /etc/cni:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: ip a s:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: ip r s:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: iptables-save:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: iptables table nat:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-609503
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-609503
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-609503" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-609503" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-609503
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-609503
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-609503" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-609503" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-609503" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-609503" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-609503" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: kubelet daemon config:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> k8s: kubelet logs:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
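Note: this kubeconfig renders entirely as null values, i.e. no clusters, contexts or users are known at all, which matches the cilium-609503 profile never having been started. An empty or missing kubeconfig produces exactly this shape; a minimal sketch to reproduce it (the path is illustrative):

    # Render an empty kubeconfig; missing files are treated as empty
    KUBECONFIG=/tmp/nonexistent-kubeconfig kubectl config view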
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-609503
>>> host: docker daemon status:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: docker daemon config:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: docker system info:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: cri-docker daemon status:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: cri-docker daemon config:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: cri-dockerd version:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: containerd daemon status:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: containerd daemon config:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: containerd config dump:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: crio daemon status:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: crio daemon config:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: /etc/crio:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
>>> host: crio config:
* Profile "cilium-609503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609503"
----------------------- debugLogs end: cilium-609503 [took: 5.794257304s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-609503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-609503
--- SKIP: TestNetworkPlugins/group/cilium (6.00s)
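Note: the "Cleaning up" step above shows the per-profile teardown the test helpers perform. A sketch of the equivalent manual cleanup (the --all/--purge variant removes every profile plus the local ~/.minikube state, so it is only appropriate on dedicated CI hosts):

    # Delete a single leftover profile
    minikube delete -p cilium-609503
    # Or wipe all profiles and local minikube state
    minikube delete --all --purge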